* [PATCH v3 0/9] faster augmented rbtree interface
From: Michel Lespinasse @ 2012-08-20 22:05 UTC
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

These are my proposed changes for a faster augmented rbtree interface.

Patches 1-8 are unchanged from my v2 posting (in v2 they were numbered
1 and 3-9; patch 2 from v2 has already been applied to Andrew's -mm
tree). Patch 9 was not part of the original v2 posting; I posted it
later as a reply to v2's patch 8/9, following a suggestion from Peter
Zijlstra. Andrew asked me to apply the patches on top of his current
tree, gather the Acked-by and Reviewed-by lines from the previous
posting, and resend publicly, so here goes.

As noted in v2:

Patch 1 is a trivial fix for a sparse warning.

Patches 2-3 are small cleanups, mainly intended to make the code more readable.

Patches 4-5 are new as of v2, based on an observation George Spelvin
made on my previous RFC: in rb_erase(), recoloring is trivial for nodes
that have exactly one child. We can shave a few cycles by handling that
case locally and changing rb_erase_color() to deal only with the
no-children case.

Patch 6 adds a performance test for the augmented rbtree support.

Patch 7 introduces my proposed API for augmented rbtree support.
rb_insert_augmented() and rb_erase_augmented() are augmented versions of
rb_insert_color() and rb_erase(). They take an additional argument
(struct rb_augment_callbacks) specifying the callbacks used to maintain
the augmented rbtree information. Users must provide 3 callbacks through
that structure. Non-augmented rbtree support is provided by inlining
dummy callbacks, so that the non-augmented case is not affected (either
in speed or in compiled size) by the new augmented rbtree API. For
augmented rbtree users, no inlining takes place at this point (I may
propose it later, but feel it shouldn't go with the initial proposal).
A minimal usage sketch follows.
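
To illustrate, here is a minimal usage sketch. The foo_* names are
hypothetical placeholders; see patch 7 for a complete interval tree
example with real callback bodies:

	static const struct rb_augment_callbacks foo_callbacks = {
		foo_propagate, foo_copy, foo_rotate
	};

	/* insertion: link the node as usual, then rebalance */
	rb_link_node(&node->rb, rb_parent, link);
	rb_insert_augmented(&node->rb, root, &foo_callbacks);

	/* removal */
	rb_erase_augmented(&node->rb, root, &foo_callbacks);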

Patch 8 removes the old augmented rbtree interface and converts its
only user to the new interface.

Patch 9 adds an RB_DECLARE_CALLBACKS() macro so that people don't have to
write the 3 small callback functions for each augmented rbtree usage.


Overall, this series improves non-augmented rbtree speed by ~5%. For
augmented rbtree users, the new interface is ~2.5 times faster than the old.

Michel Lespinasse (9):
  rbtree test: fix sparse warning about 64-bit constant
  rbtree: add __rb_change_child() helper function
  rbtree: place easiest case first in rb_erase()
  rbtree: handle 1-child recoloring in rb_erase() instead of
    rb_erase_color()
  rbtree: low level optimizations in rb_erase()
  rbtree: augmented rbtree test
  rbtree: faster augmented rbtree manipulation
  rbtree: remove prior augmented rbtree implementation
  rbtree: add RB_DECLARE_CALLBACKS() macro

 Documentation/rbtree.txt |  190 +++++++++++++++++++++-----
 arch/x86/mm/pat_rbtree.c |   32 ++---
 include/linux/rbtree.h   |   53 ++++++-
 lib/rbtree.c             |  352 +++++++++++++++++++++++++---------------------
 lib/rbtree_test.c        |  105 +++++++++++++-
 5 files changed, 513 insertions(+), 219 deletions(-)

-- 
1.7.7.3

* [PATCH v3 1/9] rbtree test: fix sparse warning about 64-bit constant
From: Michel Lespinasse @ 2012-08-20 22:05 UTC
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

Just a small fix to make sparse happy.

Signed-off-by: Michel Lespinasse <walken@google.com>
Reported-by: Fengguang Wu <wfg@linux.intel.com>
Acked-by: Rik van Riel <riel@redhat.com>

---
 lib/rbtree_test.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/lib/rbtree_test.c b/lib/rbtree_test.c
index 19dfca9ff7d7..fd09465d82ca 100644
--- a/lib/rbtree_test.c
+++ b/lib/rbtree_test.c
@@ -88,7 +88,7 @@ static int rbtree_test_init(void)
 
 	printk(KERN_ALERT "rbtree testing");
 
-	prandom32_seed(&rnd, 3141592653589793238);
+	prandom32_seed(&rnd, 3141592653589793238ULL);
 	init();
 
 	time1 = get_cycles();
-- 
1.7.7.3

* [PATCH v3 2/9] rbtree: add __rb_change_child() helper function
From: Michel Lespinasse @ 2012-08-20 22:05 UTC
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

Add __rb_change_child() as an inline helper function to replace code that
would otherwise be duplicated 4 times in the source.

No changes to binary size or speed.

Signed-off-by: Michel Lespinasse <walken@google.com>
Reviewed-by: Rik van Riel <riel@redhat.com>

---
 lib/rbtree.c |   46 +++++++++++++++++-----------------------------
 1 files changed, 17 insertions(+), 29 deletions(-)

diff --git a/lib/rbtree.c b/lib/rbtree.c
index 61cdd0e3e538..de89a614b1ba 100644
--- a/lib/rbtree.c
+++ b/lib/rbtree.c
@@ -66,6 +66,19 @@ static inline struct rb_node *rb_red_parent(struct rb_node *red)
 	return (struct rb_node *)red->__rb_parent_color;
 }
 
+static inline void
+__rb_change_child(struct rb_node *old, struct rb_node *new,
+		  struct rb_node *parent, struct rb_root *root)
+{
+	if (parent) {
+		if (parent->rb_left == old)
+			parent->rb_left = new;
+		else
+			parent->rb_right = new;
+	} else
+		root->rb_node = new;
+}
+
 /*
  * Helper function for rotations:
  * - old's parent and color get assigned to new
@@ -78,13 +91,7 @@ __rb_rotate_set_parents(struct rb_node *old, struct rb_node *new,
 	struct rb_node *parent = rb_parent(old);
 	new->__rb_parent_color = old->__rb_parent_color;
 	rb_set_parent_color(old, new, color);
-	if (parent) {
-		if (parent->rb_left == old)
-			parent->rb_left = new;
-		else
-			parent->rb_right = new;
-	} else
-		root->rb_node = new;
+	__rb_change_child(old, new, parent, root);
 }
 
 void rb_insert_color(struct rb_node *node, struct rb_root *root)
@@ -375,13 +382,7 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 		while ((left = node->rb_left) != NULL)
 			node = left;
 
-		if (rb_parent(old)) {
-			if (rb_parent(old)->rb_left == old)
-				rb_parent(old)->rb_left = node;
-			else
-				rb_parent(old)->rb_right = node;
-		} else
-			root->rb_node = node;
+		__rb_change_child(old, node, rb_parent(old), root);
 
 		child = node->rb_right;
 		parent = rb_parent(node);
@@ -410,13 +411,7 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 
 	if (child)
 		rb_set_parent(child, parent);
-	if (parent) {
-		if (parent->rb_left == node)
-			parent->rb_left = child;
-		else
-			parent->rb_right = child;
-	} else
-		root->rb_node = child;
+	__rb_change_child(node, child, parent, root);
 
 color:
 	if (color == RB_BLACK)
@@ -591,14 +586,7 @@ void rb_replace_node(struct rb_node *victim, struct rb_node *new,
 	struct rb_node *parent = rb_parent(victim);
 
 	/* Set the surrounding nodes to point to the replacement */
-	if (parent) {
-		if (victim == parent->rb_left)
-			parent->rb_left = new;
-		else
-			parent->rb_right = new;
-	} else {
-		root->rb_node = new;
-	}
+	__rb_change_child(victim, new, parent, root);
 	if (victim->rb_left)
 		rb_set_parent(victim->rb_left, new);
 	if (victim->rb_right)
-- 
1.7.7.3

* [PATCH v3 3/9] rbtree: place easiest case first in rb_erase()
From: Michel Lespinasse @ 2012-08-20 22:05 UTC
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

In rb_erase(), move the easy case (the node to erase has no more than
1 child) so that it is handled first. I feel the code reads more easily
that way.

Signed-off-by: Michel Lespinasse <walken@google.com>
Reviewed-by: Rik van Riel <riel@redhat.com>

---
 lib/rbtree.c |   35 ++++++++++++++++++-----------------
 1 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/lib/rbtree.c b/lib/rbtree.c
index de89a614b1ba..bde1b5c5fb33 100644
--- a/lib/rbtree.c
+++ b/lib/rbtree.c
@@ -368,17 +368,28 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
 
 void rb_erase(struct rb_node *node, struct rb_root *root)
 {
-	struct rb_node *child, *parent;
+	struct rb_node *child = node->rb_right, *tmp = node->rb_left;
+	struct rb_node *parent;
 	int color;
 
-	if (!node->rb_left)
-		child = node->rb_right;
-	else if (!node->rb_right)
-		child = node->rb_left;
-	else {
+	if (!tmp) {
+	case1:
+		/* Case 1: node to erase has no more than 1 child (easy!) */
+
+		parent = rb_parent(node);
+		color = rb_color(node);
+
+		if (child)
+			rb_set_parent(child, parent);
+		__rb_change_child(node, child, parent, root);
+	} else if (!child) {
+		/* Still case 1, but this time the child is node->rb_left */
+		child = tmp;
+		goto case1;
+	} else {
 		struct rb_node *old = node, *left;
 
-		node = node->rb_right;
+		node = child;
 		while ((left = node->rb_left) != NULL)
 			node = left;
 
@@ -402,18 +413,8 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 		node->__rb_parent_color = old->__rb_parent_color;
 		node->rb_left = old->rb_left;
 		rb_set_parent(old->rb_left, node);
-
-		goto color;
 	}
 
-	parent = rb_parent(node);
-	color = rb_color(node);
-
-	if (child)
-		rb_set_parent(child, parent);
-	__rb_change_child(node, child, parent, root);
-
-color:
 	if (color == RB_BLACK)
 		__rb_erase_color(child, parent, root);
 }
-- 
1.7.7.3

* [PATCH v3 4/9] rbtree: handle 1-child recoloring in rb_erase() instead of rb_erase_color()
From: Michel Lespinasse @ 2012-08-20 22:05 UTC
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

An interesting observation for rb_erase() is that a node with exactly
one child must be black with a red child: the lone child must be red,
as otherwise paths through it would carry one more black node than
paths through the empty side (property 5), and the parent of a red node
must be black (property 4). A useful consequence is that removing such
a node can be done by simply replacing it with its child and making the
child black, which we can do efficiently in rb_erase().
__rb_erase_color() then only needs to handle the no-children case and
can be modified accordingly. A sketch of the shortcut is below.
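
The shortcut, reduced to its relevant lines (a sketch of what the hunks
below implement, using the existing lib/rbtree.c helpers):

	/* node is black with a single child; that child must be red */
	parent = rb_parent(node);
	__rb_change_child(node, child, parent, root);
	rb_set_parent_color(child, parent, RB_BLACK);
	/* tree properties hold again: no __rb_erase_color() call needed */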

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>

---
 lib/rbtree.c |  105 ++++++++++++++++++++++++++++++++++------------------------
 1 files changed, 62 insertions(+), 43 deletions(-)

diff --git a/lib/rbtree.c b/lib/rbtree.c
index bde1b5c5fb33..80b092538fa9 100644
--- a/lib/rbtree.c
+++ b/lib/rbtree.c
@@ -2,7 +2,8 @@
   Red Black Trees
   (C) 1999  Andrea Arcangeli <andrea@suse.de>
   (C) 2002  David Woodhouse <dwmw2@infradead.org>
-  
+  (C) 2012  Michel Lespinasse <walken@google.com>
+
   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2 of the License, or
@@ -50,6 +51,11 @@
 #define rb_is_red(r)   (!rb_color(r))
 #define rb_is_black(r) rb_color(r)
 
+static inline void rb_set_black(struct rb_node *rb)
+{
+	rb->__rb_parent_color |= RB_BLACK;
+}
+
 static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
 {
 	rb->__rb_parent_color = rb_color(rb) | (unsigned long)p;
@@ -214,27 +220,18 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 }
 EXPORT_SYMBOL(rb_insert_color);
 
-static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
-			     struct rb_root *root)
+static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 {
-	struct rb_node *sibling, *tmp1, *tmp2;
+	struct rb_node *node = NULL, *sibling, *tmp1, *tmp2;
 
 	while (true) {
 		/*
-		 * Loop invariant: all leaf paths going through node have a
-		 * black node count that is 1 lower than other leaf paths.
-		 *
-		 * If node is red, we can flip it to black to adjust.
-		 * If node is the root, all leaf paths go through it.
-		 * Otherwise, we need to adjust the tree through color flips
-		 * and tree rotations as per one of the 4 cases below.
+		 * Loop invariants:
+		 * - node is black (or NULL on first iteration)
+		 * - node is not the root (parent is not NULL)
+		 * - All leaf paths going through parent and node have a
+		 *   black node count that is 1 lower than other leaf paths.
 		 */
-		if (node && rb_is_red(node)) {
-			rb_set_parent_color(node, parent, RB_BLACK);
-			break;
-		} else if (!parent) {
-			break;
-		}
 		sibling = parent->rb_right;
 		if (node != sibling) {	/* node == parent->rb_left */
 			if (rb_is_red(sibling)) {
@@ -268,17 +265,22 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
 					 *      / \           / \
 					 *     Sl  Sr        Sl  Sr
 					 *
-					 * This leaves us violating 5), so
-					 * recurse at p. If p is red, the
-					 * recursion will just flip it to black
-					 * and exit. If coming from Case 1,
-					 * p is known to be red.
+					 * This leaves us violating 5) which
+					 * can be fixed by flipping p to black
+					 * if it was red, or by recursing at p.
+					 * p is red when coming from Case 1.
 					 */
 					rb_set_parent_color(sibling, parent,
 							    RB_RED);
-					node = parent;
-					parent = rb_parent(node);
-					continue;
+					if (rb_is_red(parent))
+						rb_set_black(parent);
+					else {
+						node = parent;
+						parent = rb_parent(node);
+						if (parent)
+							continue;
+					}
+					break;
 				}
 				/*
 				 * Case 3 - right rotate at sibling
@@ -339,9 +341,15 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
 					/* Case 2 - sibling color flip */
 					rb_set_parent_color(sibling, parent,
 							    RB_RED);
-					node = parent;
-					parent = rb_parent(node);
-					continue;
+					if (rb_is_red(parent))
+						rb_set_black(parent);
+					else {
+						node = parent;
+						parent = rb_parent(node);
+						if (parent)
+							continue;
+					}
+					break;
 				}
 				/* Case 3 - right rotate at sibling */
 				sibling->rb_right = tmp1 = tmp2->rb_left;
@@ -369,23 +377,31 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
 void rb_erase(struct rb_node *node, struct rb_root *root)
 {
 	struct rb_node *child = node->rb_right, *tmp = node->rb_left;
-	struct rb_node *parent;
-	int color;
+	struct rb_node *parent, *rebalance;
 
 	if (!tmp) {
-	case1:
-		/* Case 1: node to erase has no more than 1 child (easy!) */
+		/*
+		 * Case 1: node to erase has no more than 1 child (easy!)
+		 *
+		 * Note that if there is one child it must be red due to 5)
+		 * and node must be black due to 4). We adjust colors locally
+		 * so as to bypass __rb_erase_color() later on.
+		 */
 
 		parent = rb_parent(node);
-		color = rb_color(node);
-
-		if (child)
-			rb_set_parent(child, parent);
 		__rb_change_child(node, child, parent, root);
+		if (child) {
+			rb_set_parent_color(child, parent, RB_BLACK);
+			rebalance = NULL;
+		} else {
+			rebalance = rb_is_black(node) ? parent : NULL;
+		}
 	} else if (!child) {
 		/* Still case 1, but this time the child is node->rb_left */
-		child = tmp;
-		goto case1;
+		parent = rb_parent(node);
+		__rb_change_child(node, tmp, parent, root);
+		rb_set_parent_color(tmp, parent, RB_BLACK);
+		rebalance = NULL;
 	} else {
 		struct rb_node *old = node, *left;
 
@@ -397,26 +413,29 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 
 		child = node->rb_right;
 		parent = rb_parent(node);
-		color = rb_color(node);
 
 		if (parent == old) {
 			parent = node;
 		} else {
-			if (child)
-				rb_set_parent(child, parent);
 			parent->rb_left = child;
 
 			node->rb_right = old->rb_right;
 			rb_set_parent(old->rb_right, node);
 		}
 
+		if (child) {
+			rb_set_parent_color(child, parent, RB_BLACK);
+			rebalance = NULL;
+		} else {
+			rebalance = rb_is_black(node) ? parent : NULL;
+		}
 		node->__rb_parent_color = old->__rb_parent_color;
 		node->rb_left = old->rb_left;
 		rb_set_parent(old->rb_left, node);
 	}
 
-	if (color == RB_BLACK)
-		__rb_erase_color(child, parent, root);
+	if (rebalance)
+		__rb_erase_color(rebalance, root);
 }
 EXPORT_SYMBOL(rb_erase);
 
-- 
1.7.7.3

* [PATCH v3 5/9] rbtree: low level optimizations in rb_erase()
From: Michel Lespinasse @ 2012-08-20 22:05 UTC
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

Various minor optimizations in rb_erase():
- Avoid loading node->__rb_parent_color multiple times when computing
  parent and color information (possibly not in close sequence, as there
  might be further branches in the algorithm)
- In the 1-child subcase of case 1, copy the __rb_parent_color field from
  the erased node to the child instead of recomputing it from the desired
  parent and color
- When searching for the erased node's successor, differentiate between
  cases 2 and 3 based on whether any left links were followed. This avoids
  a condition later down.
- In case 3, keep a pointer to the erased node's right child so we don't
  have to refetch it later to adjust its parent.
- In the no-children subcase of cases 2 and 3, place the rebalance
  assignment last so that the compiler can remove the following
  if (rebalance) test.

Also add some comments to illustrate cases 2 and 3. A sketch of the
first optimization follows.
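
The first point, sketched with the __rb_parent_color decoding macros
this patch introduces (a fragment, not a complete function):

	unsigned long pc = node->__rb_parent_color;	/* single load */
	parent = __rb_parent(pc);			/* pointer bits */
	rebalance = __rb_is_black(pc) ? parent : NULL;	/* color bit */

as opposed to calling rb_parent(node) and rb_color(node) separately,
each of which would reload node->__rb_parent_color.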

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>

---
 lib/rbtree.c |   98 ++++++++++++++++++++++++++++++++++++++--------------------
 1 files changed, 64 insertions(+), 34 deletions(-)

diff --git a/lib/rbtree.c b/lib/rbtree.c
index 80b092538fa9..938061ecbe61 100644
--- a/lib/rbtree.c
+++ b/lib/rbtree.c
@@ -47,9 +47,14 @@
 #define	RB_RED		0
 #define	RB_BLACK	1
 
-#define rb_color(r)   ((r)->__rb_parent_color & 1)
-#define rb_is_red(r)   (!rb_color(r))
-#define rb_is_black(r) rb_color(r)
+#define __rb_parent(pc)    ((struct rb_node *)(pc & ~3))
+
+#define __rb_color(pc)     ((pc) & 1)
+#define __rb_is_black(pc)  __rb_color(pc)
+#define __rb_is_red(pc)    (!__rb_color(pc))
+#define rb_color(rb)       __rb_color((rb)->__rb_parent_color)
+#define rb_is_red(rb)      __rb_is_red((rb)->__rb_parent_color)
+#define rb_is_black(rb)    __rb_is_black((rb)->__rb_parent_color)
 
 static inline void rb_set_black(struct rb_node *rb)
 {
@@ -378,6 +383,7 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 {
 	struct rb_node *child = node->rb_right, *tmp = node->rb_left;
 	struct rb_node *parent, *rebalance;
+	unsigned long pc;
 
 	if (!tmp) {
 		/*
@@ -387,51 +393,75 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 		 * and node must be black due to 4). We adjust colors locally
 		 * so as to bypass __rb_erase_color() later on.
 		 */
-
-		parent = rb_parent(node);
+		pc = node->__rb_parent_color;
+		parent = __rb_parent(pc);
 		__rb_change_child(node, child, parent, root);
 		if (child) {
-			rb_set_parent_color(child, parent, RB_BLACK);
+			child->__rb_parent_color = pc;
 			rebalance = NULL;
-		} else {
-			rebalance = rb_is_black(node) ? parent : NULL;
-		}
+		} else
+			rebalance = __rb_is_black(pc) ? parent : NULL;
 	} else if (!child) {
 		/* Still case 1, but this time the child is node->rb_left */
-		parent = rb_parent(node);
+		tmp->__rb_parent_color = pc = node->__rb_parent_color;
+		parent = __rb_parent(pc);
 		__rb_change_child(node, tmp, parent, root);
-		rb_set_parent_color(tmp, parent, RB_BLACK);
 		rebalance = NULL;
 	} else {
-		struct rb_node *old = node, *left;
-
-		node = child;
-		while ((left = node->rb_left) != NULL)
-			node = left;
-
-		__rb_change_child(old, node, rb_parent(old), root);
-
-		child = node->rb_right;
-		parent = rb_parent(node);
-
-		if (parent == old) {
-			parent = node;
+		struct rb_node *successor = child, *child2;
+		tmp = child->rb_left;
+		if (!tmp) {
+			/*
+			 * Case 2: node's successor is its right child
+			 *
+			 *    (n)          (s)
+			 *    / \          / \
+			 *  (x) (s)  ->  (x) (c)
+			 *        \
+			 *        (c)
+			 */
+			parent = child;
+			child2 = child->rb_right;
 		} else {
-			parent->rb_left = child;
-
-			node->rb_right = old->rb_right;
-			rb_set_parent(old->rb_right, node);
+			/*
+			 * Case 3: node's successor is leftmost under
+			 * node's right child subtree
+			 *
+			 *    (n)          (s)
+			 *    / \          / \
+			 *  (x) (y)  ->  (x) (y)
+			 *      /            /
+			 *    (p)          (p)
+			 *    /            /
+			 *  (s)          (c)
+			 *    \
+			 *    (c)
+			 */
+			do {
+				parent = successor;
+				successor = tmp;
+				tmp = tmp->rb_left;
+			} while (tmp);
+			parent->rb_left = child2 = successor->rb_right;
+			successor->rb_right = child;
+			rb_set_parent(child, successor);
 		}
 
-		if (child) {
-			rb_set_parent_color(child, parent, RB_BLACK);
+		successor->rb_left = tmp = node->rb_left;
+		rb_set_parent(tmp, successor);
+
+		pc = node->__rb_parent_color;
+		tmp = __rb_parent(pc);
+		__rb_change_child(node, successor, tmp, root);
+		if (child2) {
+			successor->__rb_parent_color = pc;
+			rb_set_parent_color(child2, parent, RB_BLACK);
 			rebalance = NULL;
 		} else {
-			rebalance = rb_is_black(node) ? parent : NULL;
+			unsigned long pc2 = successor->__rb_parent_color;
+			successor->__rb_parent_color = pc;
+			rebalance = __rb_is_black(pc2) ? parent : NULL;
 		}
-		node->__rb_parent_color = old->__rb_parent_color;
-		node->rb_left = old->rb_left;
-		rb_set_parent(old->rb_left, node);
 	}
 
 	if (rebalance)
-- 
1.7.7.3

* [PATCH v3 6/9] rbtree: augmented rbtree test
From: Michel Lespinasse @ 2012-08-20 22:05 UTC
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

Small test to measure the performance of augmented rbtrees.

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>

---
 lib/rbtree_test.c |  103 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 101 insertions(+), 2 deletions(-)

diff --git a/lib/rbtree_test.c b/lib/rbtree_test.c
index fd09465d82ca..66e41d4bfc39 100644
--- a/lib/rbtree_test.c
+++ b/lib/rbtree_test.c
@@ -10,6 +10,10 @@
 struct test_node {
 	struct rb_node rb;
 	u32 key;
+
+	/* following fields used for testing augmented rbtree functionality */
+	u32 val;
+	u32 augmented;
 };
 
 static struct rb_root root = RB_ROOT;
@@ -20,10 +24,11 @@ static struct rnd_state rnd;
 static void insert(struct test_node *node, struct rb_root *root)
 {
 	struct rb_node **new = &root->rb_node, *parent = NULL;
+	u32 key = node->key;
 
 	while (*new) {
 		parent = *new;
-		if (node->key < rb_entry(parent, struct test_node, rb)->key)
+		if (key < rb_entry(parent, struct test_node, rb)->key)
 			new = &parent->rb_left;
 		else
 			new = &parent->rb_right;
@@ -38,11 +43,62 @@ static inline void erase(struct test_node *node, struct rb_root *root)
 	rb_erase(&node->rb, root);
 }
 
+static inline u32 augment_recompute(struct test_node *node)
+{
+	u32 max = node->val, child_augmented;
+	if (node->rb.rb_left) {
+		child_augmented = rb_entry(node->rb.rb_left, struct test_node,
+					   rb)->augmented;
+		if (max < child_augmented)
+			max = child_augmented;
+	}
+	if (node->rb.rb_right) {
+		child_augmented = rb_entry(node->rb.rb_right, struct test_node,
+					   rb)->augmented;
+		if (max < child_augmented)
+			max = child_augmented;
+	}
+	return max;
+}
+
+static void augment_callback(struct rb_node *rb, void *unused)
+{
+	struct test_node *node = rb_entry(rb, struct test_node, rb);
+	node->augmented = augment_recompute(node);
+}
+
+static void insert_augmented(struct test_node *node, struct rb_root *root)
+{
+	struct rb_node **new = &root->rb_node, *parent = NULL;
+	u32 key = node->key;
+
+	while (*new) {
+		parent = *new;
+		if (key < rb_entry(parent, struct test_node, rb)->key)
+			new = &parent->rb_left;
+		else
+			new = &parent->rb_right;
+	}
+
+	rb_link_node(&node->rb, parent, new);
+	rb_insert_color(&node->rb, root);
+	rb_augment_insert(&node->rb, augment_callback, NULL);
+}
+
+static void erase_augmented(struct test_node *node, struct rb_root *root)
+{
+	struct rb_node *deepest = rb_augment_erase_begin(&node->rb);
+	rb_erase(&node->rb, root);
+	rb_augment_erase_end(deepest, augment_callback, NULL);
+}
+
 static void init(void)
 {
 	int i;
-	for (i = 0; i < NODES; i++)
+	for (i = 0; i < NODES; i++) {
 		nodes[i].key = prandom32(&rnd);
+		nodes[i].val = prandom32(&rnd);
+	}
 }
 
 static bool is_red(struct rb_node *rb)
@@ -81,6 +137,17 @@ static void check(int nr_nodes)
 	WARN_ON_ONCE(count != nr_nodes);
 }
 
+static void check_augmented(int nr_nodes)
+{
+	struct rb_node *rb;
+
+	check(nr_nodes);
+	for (rb = rb_first(&root); rb; rb = rb_next(rb)) {
+		struct test_node *node = rb_entry(rb, struct test_node, rb);
+		WARN_ON_ONCE(node->augmented != augment_recompute(node));
+	}
+}
+
 static int rbtree_test_init(void)
 {
 	int i, j;
@@ -119,6 +186,38 @@ static int rbtree_test_init(void)
 		check(0);
 	}
 
+	printk(KERN_ALERT "augmented rbtree testing");
+
+	init();
+
+	time1 = get_cycles();
+
+	for (i = 0; i < PERF_LOOPS; i++) {
+		for (j = 0; j < NODES; j++)
+			insert_augmented(nodes + j, &root);
+		for (j = 0; j < NODES; j++)
+			erase_augmented(nodes + j, &root);
+	}
+
+	time2 = get_cycles();
+	time = time2 - time1;
+
+	time = div_u64(time, PERF_LOOPS);
+	printk(" -> %llu cycles\n", (unsigned long long)time);
+
+	for (i = 0; i < CHECK_LOOPS; i++) {
+		init();
+		for (j = 0; j < NODES; j++) {
+			check_augmented(j);
+			insert_augmented(nodes + j, &root);
+		}
+		for (j = 0; j < NODES; j++) {
+			check_augmented(NODES - j);
+			erase_augmented(nodes + j, &root);
+		}
+		check_augmented(0);
+	}
+
 	return -EAGAIN; /* Fail will directly unload the module */
 }
 
-- 
1.7.7.3

* [PATCH v3 7/9] rbtree: faster augmented rbtree manipulation
From: Michel Lespinasse @ 2012-08-20 22:05 UTC
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

Introduce new augmented rbtree APIs that allow minimal recalculation of
augmented node information.

A new callback is added to the rbtree insertion and erase rebalancing
functions, to be called on each tree rotation. Such rotations preserve
the augmented value at the subtree's root position, but require it to be
recalculated for the one node that was previously the subtree root and
is now a child.

In the insertion case, the user's hand-coded search phase must be
updated to maintain the augmented information, and the rbtree
coloring/rebalancing algorithms then keep it up to date.

In the erase case, things are more complicated since it is library
code that manipulates the rbtree in order to remove internal nodes.
This requires a couple of additional callbacks to copy a subtree's
augmented value when a new root is stitched in, and to recompute
augmented values down the ancestry path when a node is removed from
the tree.

In order to preserve maximum speed for the non-augmented case,
we provide two versions of each tree manipulation function.
rb_insert_augmented() is the augmented equivalent of rb_insert_color(),
and rb_erase_augmented() is the augmented equivalent of rb_erase().
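
For reference, the callback structure introduced in
include/linux/rbtree.h by this patch:

	struct rb_augment_callbacks {
		void (*propagate)(struct rb_node *node, struct rb_node *stop);
		void (*copy)(struct rb_node *old, struct rb_node *new);
		void (*rotate)(struct rb_node *old, struct rb_node *new);
	};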

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>

---
 Documentation/rbtree.txt |  190 ++++++++++++++++++++++++++++++++++++++--------
 include/linux/rbtree.h   |   19 +++++
 lib/rbtree.c             |   83 ++++++++++++++++++--
 lib/rbtree_test.c        |   58 +++++++++++----
 4 files changed, 296 insertions(+), 54 deletions(-)

diff --git a/Documentation/rbtree.txt b/Documentation/rbtree.txt
index 8d32d85a5234..0a0b6dce3e08 100644
--- a/Documentation/rbtree.txt
+++ b/Documentation/rbtree.txt
@@ -193,24 +193,42 @@ Example:
 Support for Augmented rbtrees
 -----------------------------
 
-Augmented rbtree is an rbtree with "some" additional data stored in each node.
-This data can be used to augment some new functionality to rbtree.
-Augmented rbtree is an optional feature built on top of basic rbtree
-infrastructure. An rbtree user who wants this feature will have to call the
-augmentation functions with the user provided augmentation callback
-when inserting and erasing nodes.
+Augmented rbtree is an rbtree with "some" additional data stored in
+each node, where the additional data for node N must be a function of
+the contents of all nodes in the subtree rooted at N. This data can
+be used to augment some new functionality to rbtree. Augmented rbtree
+is an optional feature built on top of basic rbtree infrastructure.
+An rbtree user who wants this feature will have to call the augmentation
+functions with the user provided augmentation callback when inserting
+and erasing nodes.
 
-On insertion, the user must call rb_augment_insert() once the new node is in
-place. This will cause the augmentation function callback to be called for
-each node between the new node and the root which has been affected by the
-insertion.
+On insertion, the user must update the augmented information on the path
+leading to the inserted node, then call rb_link_node() as usual and
+rb_augment_inserted() instead of the usual rb_insert_color() call.
+If rb_augment_inserted() rebalances the rbtree, it will callback into
+a user provided function to update the augmented information on the
+affected subtrees.
 
-When erasing a node, the user must call rb_augment_erase_begin() first to
-retrieve the deepest node on the rebalance path. Then, after erasing the
-original node, the user must call rb_augment_erase_end() with the deepest
-node found earlier. This will cause the augmentation function to be called
-for each affected node between the deepest node and the root.
+When erasing a node, the user must call rb_erase_augmented() instead of
+rb_erase(). rb_erase_augmented() calls back into user provided functions
+to update the augmented information on affected subtrees.
 
+In both cases, the callbacks are provided through struct rb_augment_callbacks.
+3 callbacks must be defined:
+
+- A propagation callback, which updates the augmented value for a given
+  node and its ancestors, up to a given stop point (or NULL to update
+  all the way to the root).
+
+- A copy callback, which copies the augmented value for a given subtree
+  to a newly assigned subtree root.
+
+- A tree rotation callback, which copies the augmented value for a given
+  subtree to a newly assigned subtree root AND recomputes the augmented
+  information for the former subtree root.
+
+
+Sample usage:
 
 Interval tree is an example of augmented rb tree. Reference -
 "Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein.
@@ -230,26 +248,132 @@ and its immediate children. And this will be used in O(log n) lookup
 for lowest match (lowest start address among all possible matches)
 with something like:
 
-find_lowest_match(lo, hi, node)
+struct interval_tree_node *
+interval_tree_first_match(struct rb_root *root,
+			  unsigned long start, unsigned long last)
 {
-	lowest_match = NULL;
-	while (node) {
-		if (max_hi(node->left) > lo) {
-			// Lowest overlap if any must be on left side
-			node = node->left;
-		} else if (overlap(lo, hi, node)) {
-			lowest_match = node;
-			break;
-		} else if (lo > node->lo) {
-			// Lowest overlap if any must be on right side
-			node = node->right;
-		} else {
-			break;
+	struct interval_tree_node *node;
+
+	if (!root->rb_node)
+		return NULL;
+	node = rb_entry(root->rb_node, struct interval_tree_node, rb);
+
+	while (true) {
+		if (node->rb.rb_left) {
+			struct interval_tree_node *left =
+				rb_entry(node->rb.rb_left,
+					 struct interval_tree_node, rb);
+			if (left->__subtree_last >= start) {
+				/*
+				 * Some nodes in left subtree satisfy Cond2.
+				 * Iterate to find the leftmost such node N.
+				 * If it also satisfies Cond1, that's the match
+				 * we are looking for. Otherwise, there is no
+				 * matching interval as nodes to the right of N
+				 * can't satisfy Cond1 either.
+				 */
+				node = left;
+				continue;
+			}
 		}
+		if (node->start <= last) {		/* Cond1 */
+			if (node->last >= start)	/* Cond2 */
+				return node;	/* node is leftmost match */
+			if (node->rb.rb_right) {
+				node = rb_entry(node->rb.rb_right,
+					struct interval_tree_node, rb);
+				if (node->__subtree_last >= start)
+					continue;
+			}
+		}
+		return NULL;	/* No match */
+	}
+}
+
+Insertion/removal are defined using the following augmented callbacks:
+
+static inline unsigned long
+compute_subtree_last(struct interval_tree_node *node)
+{
+	unsigned long max = node->last, subtree_last;
+	if (node->rb.rb_left) {
+		subtree_last = rb_entry(node->rb.rb_left,
+			struct interval_tree_node, rb)->__subtree_last;
+		if (max < subtree_last)
+			max = subtree_last;
+	}
+	if (node->rb.rb_right) {
+		subtree_last = rb_entry(node->rb.rb_right,
+			struct interval_tree_node, rb)->__subtree_last;
+		if (max < subtree_last)
+			max = subtree_last;
+	}
+	return max;
+}
+
+static void augment_propagate(struct rb_node *rb, struct rb_node *stop)
+{
+	while (rb != stop) {
+		struct interval_tree_node *node =
+			rb_entry(rb, struct interval_tree_node, rb);
+		unsigned long subtree_last = compute_subtree_last(node);
+		if (node->__subtree_last == subtree_last)
+			break;
+		node->__subtree_last = subtree_last;
+		rb = rb_parent(&node->rb);
+	}
+}
+
+static void augment_copy(struct rb_node *rb_old, struct rb_node *rb_new)
+{
+	struct interval_tree_node *old =
+		rb_entry(rb_old, struct interval_tree_node, rb);
+	struct interval_tree_node *new =
+		rb_entry(rb_new, struct interval_tree_node, rb);
+
+	new->__subtree_last = old->__subtree_last;
+}
+
+static void augment_rotate(struct rb_node *rb_old, struct rb_node *rb_new)
+{
+	struct interval_tree_node *old =
+		rb_entry(rb_old, struct interval_tree_node, rb);
+	struct interval_tree_node *new =
+		rb_entry(rb_new, struct interval_tree_node, rb);
+
+	new->__subtree_last = old->__subtree_last;
+	old->__subtree_last = compute_subtree_last(old);
+}
+
+static const struct rb_augment_callbacks augment_callbacks = {
+	augment_propagate, augment_copy, augment_rotate
+};
+
+void interval_tree_insert(struct interval_tree_node *node,
+			  struct rb_root *root)
+{
+	struct rb_node **link = &root->rb_node, *rb_parent = NULL;
+	unsigned long start = node->start, last = node->last;
+	struct interval_tree_node *parent;
+
+	while (*link) {
+		rb_parent = *link;
+		parent = rb_entry(rb_parent, struct interval_tree_node, rb);
+		if (parent->__subtree_last < last)
+			parent->__subtree_last = last;
+		if (start < parent->start)
+			link = &parent->rb.rb_left;
+		else
+			link = &parent->rb.rb_right;
 	}
-	return lowest_match;
+
+	node->__subtree_last = last;
+	rb_link_node(&node->rb, rb_parent, link);
+	rb_insert_augmented(&node->rb, root, &augment_callbacks);
 }
 
-Finding exact match will be to first find lowest match and then to follow
-successor nodes looking for exact match, until the start of a node is beyond
-the hi value we are looking for.
+void interval_tree_remove(struct interval_tree_node *node,
+			  struct rb_root *root)
+{
+	rb_erase_augmented(&node->rb, root, &augment_callbacks);
+}
diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h
index bf836a2c6a32..c902eb9d6506 100644
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -61,6 +61,25 @@ struct rb_root {
 extern void rb_insert_color(struct rb_node *, struct rb_root *);
 extern void rb_erase(struct rb_node *, struct rb_root *);
 
+
+struct rb_augment_callbacks {
+	void (*propagate)(struct rb_node *node, struct rb_node *stop);
+	void (*copy)(struct rb_node *old, struct rb_node *new);
+	void (*rotate)(struct rb_node *old, struct rb_node *new);
+};
+
+extern void __rb_insert_augmented(struct rb_node *node, struct rb_root *root,
+	void (*augment_rotate)(struct rb_node *old, struct rb_node *new));
+extern void rb_erase_augmented(struct rb_node *node, struct rb_root *root,
+			       const struct rb_augment_callbacks *augment);
+static inline void
+rb_insert_augmented(struct rb_node *node, struct rb_root *root,
+		    const struct rb_augment_callbacks *augment)
+{
+	__rb_insert_augmented(node, root, augment->rotate);
+}
+
+
 typedef void (*rb_augment_f)(struct rb_node *node, void *data);
 
 extern void rb_augment_insert(struct rb_node *node,
diff --git a/lib/rbtree.c b/lib/rbtree.c
index 938061ecbe61..a37ee7954b8f 100644
--- a/lib/rbtree.c
+++ b/lib/rbtree.c
@@ -105,7 +105,9 @@ __rb_rotate_set_parents(struct rb_node *old, struct rb_node *new,
 	__rb_change_child(old, new, parent, root);
 }
 
-void rb_insert_color(struct rb_node *node, struct rb_root *root)
+static __always_inline void
+__rb_insert(struct rb_node *node, struct rb_root *root,
+	    void (*augment_rotate)(struct rb_node *old, struct rb_node *new))
 {
 	struct rb_node *parent = rb_red_parent(node), *gparent, *tmp;
 
@@ -169,6 +171,7 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 					rb_set_parent_color(tmp, parent,
 							    RB_BLACK);
 				rb_set_parent_color(parent, node, RB_RED);
+				augment_rotate(parent, node);
 				parent = node;
 				tmp = node->rb_right;
 			}
@@ -187,6 +190,7 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 			if (tmp)
 				rb_set_parent_color(tmp, gparent, RB_BLACK);
 			__rb_rotate_set_parents(gparent, parent, root, RB_RED);
+			augment_rotate(gparent, parent);
 			break;
 		} else {
 			tmp = gparent->rb_left;
@@ -209,6 +213,7 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 					rb_set_parent_color(tmp, parent,
 							    RB_BLACK);
 				rb_set_parent_color(parent, node, RB_RED);
+				augment_rotate(parent, node);
 				parent = node;
 				tmp = node->rb_left;
 			}
@@ -219,13 +224,15 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 			if (tmp)
 				rb_set_parent_color(tmp, gparent, RB_BLACK);
 			__rb_rotate_set_parents(gparent, parent, root, RB_RED);
+			augment_rotate(gparent, parent);
 			break;
 		}
 	}
 }
-EXPORT_SYMBOL(rb_insert_color);
 
-static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
+static __always_inline void
+__rb_erase_color(struct rb_node *parent, struct rb_root *root,
+		 const struct rb_augment_callbacks *augment)
 {
 	struct rb_node *node = NULL, *sibling, *tmp1, *tmp2;
 
@@ -254,6 +261,7 @@ static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 				rb_set_parent_color(tmp1, parent, RB_BLACK);
 				__rb_rotate_set_parents(parent, sibling, root,
 							RB_RED);
+				augment->rotate(parent, sibling);
 				sibling = tmp1;
 			}
 			tmp1 = sibling->rb_right;
@@ -305,6 +313,7 @@ static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 				if (tmp1)
 					rb_set_parent_color(tmp1, sibling,
 							    RB_BLACK);
+				augment->rotate(sibling, tmp2);
 				tmp1 = sibling;
 				sibling = tmp2;
 			}
@@ -327,6 +336,7 @@ static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 				rb_set_parent(tmp2, parent);
 			__rb_rotate_set_parents(parent, sibling, root,
 						RB_BLACK);
+			augment->rotate(parent, sibling);
 			break;
 		} else {
 			sibling = parent->rb_left;
@@ -337,6 +347,7 @@ static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 				rb_set_parent_color(tmp1, parent, RB_BLACK);
 				__rb_rotate_set_parents(parent, sibling, root,
 							RB_RED);
+				augment->rotate(parent, sibling);
 				sibling = tmp1;
 			}
 			tmp1 = sibling->rb_left;
@@ -363,6 +374,7 @@ static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 				if (tmp1)
 					rb_set_parent_color(tmp1, sibling,
 							    RB_BLACK);
+				augment->rotate(sibling, tmp2);
 				tmp1 = sibling;
 				sibling = tmp2;
 			}
@@ -374,12 +386,15 @@ static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 				rb_set_parent(tmp2, parent);
 			__rb_rotate_set_parents(parent, sibling, root,
 						RB_BLACK);
+			augment->rotate(parent, sibling);
 			break;
 		}
 	}
 }
 
-void rb_erase(struct rb_node *node, struct rb_root *root)
+static __always_inline void
+__rb_erase(struct rb_node *node, struct rb_root *root,
+	   const struct rb_augment_callbacks *augment)
 {
 	struct rb_node *child = node->rb_right, *tmp = node->rb_left;
 	struct rb_node *parent, *rebalance;
@@ -401,12 +416,14 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 			rebalance = NULL;
 		} else
 			rebalance = __rb_is_black(pc) ? parent : NULL;
+		tmp = parent;
 	} else if (!child) {
 		/* Still case 1, but this time the child is node->rb_left */
 		tmp->__rb_parent_color = pc = node->__rb_parent_color;
 		parent = __rb_parent(pc);
 		__rb_change_child(node, tmp, parent, root);
 		rebalance = NULL;
+		tmp = parent;
 	} else {
 		struct rb_node *successor = child, *child2;
 		tmp = child->rb_left;
@@ -420,8 +437,9 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 			 *        \
 			 *        (c)
 			 */
-			parent = child;
-			child2 = child->rb_right;
+			parent = successor;
+			child2 = successor->rb_right;
+			augment->copy(node, successor);
 		} else {
 			/*
 			 * Case 3: node's successor is leftmost under
@@ -445,6 +463,8 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 			parent->rb_left = child2 = successor->rb_right;
 			successor->rb_right = child;
 			rb_set_parent(child, successor);
+			augment->copy(node, successor);
+			augment->propagate(parent, successor);
 		}
 
 		successor->rb_left = tmp = node->rb_left;
@@ -462,13 +482,62 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 			successor->__rb_parent_color = pc;
 			rebalance = __rb_is_black(pc2) ? parent : NULL;
 		}
+		tmp = successor;
 	}
 
+	augment->propagate(tmp, NULL);
 	if (rebalance)
-		__rb_erase_color(rebalance, root);
+		__rb_erase_color(rebalance, root, augment);
+}
+
+/*
+ * Non-augmented rbtree manipulation functions.
+ *
+ * We use dummy augmented callbacks here, and have the compiler optimize them
+ * out of the rb_insert_color() and rb_erase() function definitions.
+ */
+
+static inline void dummy_propagate(struct rb_node *node, struct rb_node *stop) {}
+static inline void dummy_copy(struct rb_node *old, struct rb_node *new) {}
+static inline void dummy_rotate(struct rb_node *old, struct rb_node *new) {}
+
+static const struct rb_augment_callbacks dummy_callbacks = {
+	dummy_propagate, dummy_copy, dummy_rotate
+};
+
+void rb_insert_color(struct rb_node *node, struct rb_root *root)
+{
+	__rb_insert(node, root, dummy_rotate);
+}
+EXPORT_SYMBOL(rb_insert_color);
+
+void rb_erase(struct rb_node *node, struct rb_root *root)
+{
+	__rb_erase(node, root, &dummy_callbacks);
 }
 EXPORT_SYMBOL(rb_erase);
 
+/*
+ * Augmented rbtree manipulation functions.
+ *
+ * This instantiates the same __always_inline functions as in the non-augmented
+ * case, but this time with user-defined callbacks.
+ */
+
+void __rb_insert_augmented(struct rb_node *node, struct rb_root *root,
+	void (*augment_rotate)(struct rb_node *old, struct rb_node *new))
+{
+	__rb_insert(node, root, augment_rotate);
+}
+EXPORT_SYMBOL(__rb_insert_augmented);
+
+void rb_erase_augmented(struct rb_node *node, struct rb_root *root,
+			const struct rb_augment_callbacks *augment)
+{
+	__rb_erase(node, root, augment);
+}
+EXPORT_SYMBOL(rb_erase_augmented);
+
 static void rb_augment_path(struct rb_node *node, rb_augment_f func, void *data)
 {
 	struct rb_node *parent;
diff --git a/lib/rbtree_test.c b/lib/rbtree_test.c
index 66e41d4bfc39..e28345df09bf 100644
--- a/lib/rbtree_test.c
+++ b/lib/rbtree_test.c
@@ -61,35 +61,65 @@ static inline u32 augment_recompute(struct test_node *node)
 	return max;
 }
 
-static void augment_callback(struct rb_node *rb, void *unused)
+static void augment_propagate(struct rb_node *rb, struct rb_node *stop)
 {
-	struct test_node *node = rb_entry(rb, struct test_node, rb);
-	node->augmented = augment_recompute(node);
+	while (rb != stop) {
+		struct test_node *node = rb_entry(rb, struct test_node, rb);
+		u32 augmented = augment_recompute(node);
+		if (node->augmented == augmented)
+			break;
+		node->augmented = augmented;
+		rb = rb_parent(&node->rb);
+	}
+}
+
+static void augment_copy(struct rb_node *rb_old, struct rb_node *rb_new)
+{
+	struct test_node *old = rb_entry(rb_old, struct test_node, rb);
+	struct test_node *new = rb_entry(rb_new, struct test_node, rb);
+	new->augmented = old->augmented;
 }
 
+static void augment_rotate(struct rb_node *rb_old, struct rb_node *rb_new)
+{
+	struct test_node *old = rb_entry(rb_old, struct test_node, rb);
+	struct test_node *new = rb_entry(rb_new, struct test_node, rb);
+
+	/* Rotation doesn't change subtree's augmented value */
+	new->augmented = old->augmented;
+	old->augmented = augment_recompute(old);
+}
+
+static const struct rb_augment_callbacks augment_callbacks = {
+	augment_propagate, augment_copy, augment_rotate
+};
+
 static void insert_augmented(struct test_node *node, struct rb_root *root)
 {
-	struct rb_node **new = &root->rb_node, *parent = NULL;
+	struct rb_node **new = &root->rb_node, *rb_parent = NULL;
 	u32 key = node->key;
+	u32 val = node->val;
+	struct test_node *parent;
 
 	while (*new) {
-		parent = *new;
-		if (key < rb_entry(parent, struct test_node, rb)->key)
-			new = &parent->rb_left;
+		rb_parent = *new;
+		parent = rb_entry(rb_parent, struct test_node, rb);
+		if (parent->augmented < val)
+			parent->augmented = val;
+		if (key < parent->key)
+			new = &parent->rb.rb_left;
 		else
-			new = &parent->rb_right;
+			new = &parent->rb.rb_right;
 	}
 
-	rb_link_node(&node->rb, parent, new);
-	rb_insert_color(&node->rb, root);
-	rb_augment_insert(&node->rb, augment_callback, NULL);
+	node->augmented = val;
+	rb_link_node(&node->rb, rb_parent, new);
+	rb_insert_augmented(&node->rb, root, &augment_callbacks);
 }
 
 static void erase_augmented(struct test_node *node, struct rb_root *root)
 {
-	struct rb_node *deepest = rb_augment_erase_begin(&node->rb);
-	rb_erase(&node->rb, root);
-	rb_augment_erase_end(deepest, augment_callback, NULL);
+	rb_erase_augmented(&node->rb, root, &augment_callbacks);
 }
 
 static void init(void)
-- 
1.7.7.3

* [PATCH v3 8/9] rbtree: remove prior augmented rbtree implementation
From: Michel Lespinasse @ 2012-08-20 22:05 UTC
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

Convert arch/x86/mm/pat_rbtree.c to the proposed augmented rbtree API
and remove the old augmented rbtree implementation.

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>

---
 arch/x86/mm/pat_rbtree.c |   65 +++++++++++++++++++++++++++++------------
 include/linux/rbtree.h   |    8 -----
 lib/rbtree.c             |   71 ----------------------------------------------
 3 files changed, 46 insertions(+), 98 deletions(-)

diff --git a/arch/x86/mm/pat_rbtree.c b/arch/x86/mm/pat_rbtree.c
index 8acaddd0fb21..7e1515bd4770 100644
--- a/arch/x86/mm/pat_rbtree.c
+++ b/arch/x86/mm/pat_rbtree.c
@@ -54,29 +54,57 @@ static u64 get_subtree_max_end(struct rb_node *node)
 	return ret;
 }
 
-/* Update 'subtree_max_end' for a node, based on node and its children */
-static void memtype_rb_augment_cb(struct rb_node *node, void *__unused)
+static u64 compute_subtree_max_end(struct memtype *data)
 {
-	struct memtype *data;
-	u64 max_end, child_max_end;
-
-	if (!node)
-		return;
-
-	data = container_of(node, struct memtype, rb);
-	max_end = data->end;
+	u64 max_end = data->end, child_max_end;
 
-	child_max_end = get_subtree_max_end(node->rb_right);
+	child_max_end = get_subtree_max_end(data->rb.rb_right);
 	if (child_max_end > max_end)
 		max_end = child_max_end;
 
-	child_max_end = get_subtree_max_end(node->rb_left);
+	child_max_end = get_subtree_max_end(data->rb.rb_left);
 	if (child_max_end > max_end)
 		max_end = child_max_end;
 
-	data->subtree_max_end = max_end;
+	return max_end;
+}
+
+/* Update 'subtree_max_end' for node and its parents */
+static void memtype_rb_propagate_cb(struct rb_node *node, struct rb_node *stop)
+{
+	while (node != stop) {
+		struct memtype *data = container_of(node, struct memtype, rb);
+		u64 subtree_max_end = compute_subtree_max_end(data);
+		if (data->subtree_max_end == subtree_max_end)
+			break;
+		data->subtree_max_end = subtree_max_end;
+		node = rb_parent(&data->rb);
+	}
+}
+
+static void memtype_rb_copy_cb(struct rb_node *old, struct rb_node *new)
+{
+	struct memtype *old_data = container_of(old, struct memtype, rb);
+	struct memtype *new_data = container_of(new, struct memtype, rb);
+
+	new_data->subtree_max_end = old_data->subtree_max_end;
 }
 
+/* Update 'subtree_max_end' after tree rotation. old and new are the
+ * former and current subtree roots */
+static void memtype_rb_rotate_cb(struct rb_node *old, struct rb_node *new)
+{
+	struct memtype *old_data = container_of(old, struct memtype, rb);
+	struct memtype *new_data = container_of(new, struct memtype, rb);
+
+	new_data->subtree_max_end = old_data->subtree_max_end;
+	old_data->subtree_max_end = compute_subtree_max_end(old_data);
+}
+
+static const struct rb_augment_callbacks memtype_rb_augment_cb = {
+	memtype_rb_propagate_cb, memtype_rb_copy_cb, memtype_rb_rotate_cb
+};
+
 /* Find the first (lowest start addr) overlapping range from rb tree */
 static struct memtype *memtype_rb_lowest_match(struct rb_root *root,
 				u64 start, u64 end)
@@ -179,15 +207,17 @@ static void memtype_rb_insert(struct rb_root *root, struct memtype *newdata)
 		struct memtype *data = container_of(*node, struct memtype, rb);
 
 		parent = *node;
+		if (data->subtree_max_end < newdata->end)
+			data->subtree_max_end = newdata->end;
 		if (newdata->start <= data->start)
 			node = &((*node)->rb_left);
 		else if (newdata->start > data->start)
 			node = &((*node)->rb_right);
 	}
 
+	newdata->subtree_max_end = newdata->end;
 	rb_link_node(&newdata->rb, parent, node);
-	rb_insert_color(&newdata->rb, root);
-	rb_augment_insert(&newdata->rb, memtype_rb_augment_cb, NULL);
+	rb_insert_augmented(&newdata->rb, root, &memtype_rb_augment_cb);
 }
 
 int rbt_memtype_check_insert(struct memtype *new, unsigned long *ret_type)
@@ -209,16 +239,13 @@ int rbt_memtype_check_insert(struct memtype *new, unsigned long *ret_type)
 
 struct memtype *rbt_memtype_erase(u64 start, u64 end)
 {
-	struct rb_node *deepest;
 	struct memtype *data;
 
 	data = memtype_rb_exact_match(&memtype_rbroot, start, end);
 	if (!data)
 		goto out;
 
-	deepest = rb_augment_erase_begin(&data->rb);
-	rb_erase(&data->rb, &memtype_rbroot);
-	rb_augment_erase_end(deepest, memtype_rb_augment_cb, NULL);
+	rb_erase_augmented(&data->rb, &memtype_rbroot, &memtype_rb_augment_cb);
 out:
 	return data;
 }
diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h
index c902eb9d6506..4ace31b33380 100644
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -80,14 +80,6 @@ rb_insert_augmented(struct rb_node *node, struct rb_root *root,
 }
 
 
-typedef void (*rb_augment_f)(struct rb_node *node, void *data);
-
-extern void rb_augment_insert(struct rb_node *node,
-			      rb_augment_f func, void *data);
-extern struct rb_node *rb_augment_erase_begin(struct rb_node *node);
-extern void rb_augment_erase_end(struct rb_node *node,
-				 rb_augment_f func, void *data);
-
 /* Find logical next and previous nodes in a tree */
 extern struct rb_node *rb_next(const struct rb_node *);
 extern struct rb_node *rb_prev(const struct rb_node *);
diff --git a/lib/rbtree.c b/lib/rbtree.c
index a37ee7954b8f..c0088ca345f9 100644
--- a/lib/rbtree.c
+++ b/lib/rbtree.c
@@ -538,77 +538,6 @@ void rb_erase_augmented(struct rb_node *node, struct rb_root *root,
 }
 EXPORT_SYMBOL(rb_erase_augmented);
 
-static void rb_augment_path(struct rb_node *node, rb_augment_f func, void *data)
-{
-	struct rb_node *parent;
-
-up:
-	func(node, data);
-	parent = rb_parent(node);
-	if (!parent)
-		return;
-
-	if (node == parent->rb_left && parent->rb_right)
-		func(parent->rb_right, data);
-	else if (parent->rb_left)
-		func(parent->rb_left, data);
-
-	node = parent;
-	goto up;
-}
-
-/*
- * after inserting @node into the tree, update the tree to account for
- * both the new entry and any damage done by rebalance
- */
-void rb_augment_insert(struct rb_node *node, rb_augment_f func, void *data)
-{
-	if (node->rb_left)
-		node = node->rb_left;
-	else if (node->rb_right)
-		node = node->rb_right;
-
-	rb_augment_path(node, func, data);
-}
-EXPORT_SYMBOL(rb_augment_insert);
-
-/*
- * before removing the node, find the deepest node on the rebalance path
- * that will still be there after @node gets removed
- */
-struct rb_node *rb_augment_erase_begin(struct rb_node *node)
-{
-	struct rb_node *deepest;
-
-	if (!node->rb_right && !node->rb_left)
-		deepest = rb_parent(node);
-	else if (!node->rb_right)
-		deepest = node->rb_left;
-	else if (!node->rb_left)
-		deepest = node->rb_right;
-	else {
-		deepest = rb_next(node);
-		if (deepest->rb_right)
-			deepest = deepest->rb_right;
-		else if (rb_parent(deepest) != node)
-			deepest = rb_parent(deepest);
-	}
-
-	return deepest;
-}
-EXPORT_SYMBOL(rb_augment_erase_begin);
-
-/*
- * after removal, update the tree to account for the removed entry
- * and any rebalance damage.
- */
-void rb_augment_erase_end(struct rb_node *node, rb_augment_f func, void *data)
-{
-	if (node)
-		rb_augment_path(node, func, data);
-}
-EXPORT_SYMBOL(rb_augment_erase_end);
-
 /*
  * This function returns the first node (in sort order) of the tree.
  */
-- 
1.7.7.3
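
For readers converting their own code, the conversion above condenses to
the following sketch. struct mynode, its fields, and the mynode_cb
callbacks are placeholders standing in for a user's own structure and
callback definitions (assume mynode_cb is a const struct
rb_augment_callbacks built for struct mynode; patch 9 below generates
such callbacks with a macro):

	struct mynode {
		struct rb_node rb;
		u64 start, end;
		u64 subtree_max_end;	/* max of ->end over this subtree */
	};

	static void mynode_insert(struct rb_root *root, struct mynode *new)
	{
		struct rb_node **link = &root->rb_node;
		struct rb_node *parent = NULL;

		while (*link) {
			struct mynode *node = rb_entry(*link, struct mynode, rb);

			parent = *link;
			/* keep the augmented data valid on the way down */
			if (node->subtree_max_end < new->end)
				node->subtree_max_end = new->end;
			if (new->start <= node->start)
				link = &(*link)->rb_left;
			else
				link = &(*link)->rb_right;
		}

		new->subtree_max_end = new->end;
		rb_link_node(&new->rb, parent, link);
		rb_insert_augmented(&new->rb, root, &mynode_cb);
	}

	static void mynode_erase(struct rb_root *root, struct mynode *node)
	{
		rb_erase_augmented(&node->rb, root, &mynode_cb);
	}

Note the caller maintains subtree_max_end itself on the way down;
rb_insert_augmented() only applies the rotate callback while rebalancing.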


* [PATCH v3 9/9] rbtree: add RB_DECLARE_CALLBACKS() macro
  2012-08-20 22:05 [PATCH v3 0/9] faster augmented rbtree interface Michel Lespinasse
                   ` (7 preceding siblings ...)
  2012-08-20 22:05 ` [PATCH v3 8/9] rbtree: remove prior augmented rbtree implementation Michel Lespinasse
@ 2012-08-20 22:05 ` Michel Lespinasse
  8 siblings, 0 replies; 16+ messages in thread
From: Michel Lespinasse @ 2012-08-20 22:05 UTC (permalink / raw)
  To: riel, peterz, daniel.santos, aarcange, dwmw2, akpm
  Cc: linux-mm, linux-kernel, torvalds

As proposed by Peter Zijlstra, add an RB_DECLARE_CALLBACKS() macro so that
the three augmented rbtree callbacks can be generated from a single
declaration instead of being written out by hand for every user.

Signed-off-by: Michel Lespinasse <walken@google.com>

---
 arch/x86/mm/pat_rbtree.c |   37 ++-----------------------------------
 include/linux/rbtree.h   |   30 ++++++++++++++++++++++++++++++
 lib/rbtree_test.c        |   34 ++--------------------------------
 3 files changed, 34 insertions(+), 67 deletions(-)

diff --git a/arch/x86/mm/pat_rbtree.c b/arch/x86/mm/pat_rbtree.c
index 7e1515bd4770..4d116959075d 100644
--- a/arch/x86/mm/pat_rbtree.c
+++ b/arch/x86/mm/pat_rbtree.c
@@ -69,41 +69,8 @@ static u64 compute_subtree_max_end(struct memtype *data)
 	return max_end;
 }
 
-/* Update 'subtree_max_end' for node and its parents */
-static void memtype_rb_propagate_cb(struct rb_node *node, struct rb_node *stop)
-{
-	while (node != stop) {
-		struct memtype *data = container_of(node, struct memtype, rb);
-		u64 subtree_max_end = compute_subtree_max_end(data);
-		if (data->subtree_max_end == subtree_max_end)
-			break;
-		data->subtree_max_end = subtree_max_end;
-		node = rb_parent(&data->rb);
-	}
-}
-
-static void memtype_rb_copy_cb(struct rb_node *old, struct rb_node *new)
-{
-	struct memtype *old_data = container_of(old, struct memtype, rb);
-	struct memtype *new_data = container_of(new, struct memtype, rb);
-
-	new_data->subtree_max_end = old_data->subtree_max_end;
-}
-
-/* Update 'subtree_max_end' after tree rotation. old and new are the
- * former and current subtree roots */
-static void memtype_rb_rotate_cb(struct rb_node *old, struct rb_node *new)
-{
-	struct memtype *old_data = container_of(old, struct memtype, rb);
-	struct memtype *new_data = container_of(new, struct memtype, rb);
-
-	new_data->subtree_max_end = old_data->subtree_max_end;
-	old_data->subtree_max_end = compute_subtree_max_end(old_data);
-}
-
-static const struct rb_augment_callbacks memtype_rb_augment_cb = {
-	memtype_rb_propagate_cb, memtype_rb_copy_cb, memtype_rb_rotate_cb
-};
+RB_DECLARE_CALLBACKS(static, memtype_rb_augment_cb, struct memtype, rb,
+		     u64, subtree_max_end, compute_subtree_max_end)
 
 /* Find the first (lowest start addr) overlapping range from rb tree */
 static struct memtype *memtype_rb_lowest_match(struct rb_root *root,
diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h
index 4ace31b33380..8d1e83b1c87b 100644
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -79,6 +79,36 @@ rb_insert_augmented(struct rb_node *node, struct rb_root *root,
 	__rb_insert_augmented(node, root, augment->rotate);
 }
 
+#define RB_DECLARE_CALLBACKS(rbstatic, rbname, rbstruct, rbfield,	      \
+			     rbtype, rbaugmented, rbcompute)		      \
+static void rbname ## _propagate(struct rb_node *rb, struct rb_node *stop)    \
+{									      \
+	while (rb != stop) {						      \
+		rbstruct *node = rb_entry(rb, rbstruct, rbfield);	      \
+		rbtype augmented = rbcompute(node);			      \
+		if (node->rbaugmented == augmented)			      \
+			break;						      \
+		node->rbaugmented = augmented;				      \
+		rb = rb_parent(&node->rbfield);				      \
+	}								      \
+}									      \
+static void rbname ## _copy(struct rb_node *rb_old, struct rb_node *rb_new)   \
+{									      \
+	rbstruct *old = rb_entry(rb_old, rbstruct, rbfield);		      \
+	rbstruct *new = rb_entry(rb_new, rbstruct, rbfield);		      \
+	new->rbaugmented = old->rbaugmented;				      \
+}									      \
+static void rbname ## _rotate(struct rb_node *rb_old, struct rb_node *rb_new) \
+{									      \
+	rbstruct *old = rb_entry(rb_old, rbstruct, rbfield);		      \
+	rbstruct *new = rb_entry(rb_new, rbstruct, rbfield);		      \
+	new->rbaugmented = old->rbaugmented;				      \
+	old->rbaugmented = rbcompute(old);				      \
+}									      \
+rbstatic const struct rb_augment_callbacks rbname = {			      \
+	rbname ## _propagate, rbname ## _copy, rbname ## _rotate	      \
+};
+
 
 /* Find logical next and previous nodes in a tree */
 extern struct rb_node *rb_next(const struct rb_node *);
diff --git a/lib/rbtree_test.c b/lib/rbtree_test.c
index e28345df09bf..b20e99969b0f 100644
--- a/lib/rbtree_test.c
+++ b/lib/rbtree_test.c
@@ -61,38 +61,8 @@ static inline u32 augment_recompute(struct test_node *node)
 	return max;
 }
 
-static void augment_propagate(struct rb_node *rb, struct rb_node *stop)
-{
-	while (rb != stop) {
-		struct test_node *node = rb_entry(rb, struct test_node, rb);
-		u32 augmented = augment_recompute(node);
-		if (node->augmented == augmented)
-			break;
-		node->augmented = augmented;
-		rb = rb_parent(&node->rb);
-	}
-}
-
-static void augment_copy(struct rb_node *rb_old, struct rb_node *rb_new)
-{
-	struct test_node *old = rb_entry(rb_old, struct test_node, rb);
-	struct test_node *new = rb_entry(rb_new, struct test_node, rb);
-	new->augmented = old->augmented;
-}
-
-static void augment_rotate(struct rb_node *rb_old, struct rb_node *rb_new)
-{
-	struct test_node *old = rb_entry(rb_old, struct test_node, rb);
-	struct test_node *new = rb_entry(rb_new, struct test_node, rb);
-
-	/* Rotation doesn't change subtree's augmented value */
-	new->augmented = old->augmented;
-	old->augmented = augment_recompute(old);
-}
-
-static const struct rb_augment_callbacks augment_callbacks = {
-	augment_propagate, augment_copy, augment_rotate
-};
+RB_DECLARE_CALLBACKS(static, augment_callbacks, struct test_node, rb,
+		     u32, augmented, augment_recompute)
 
 static void insert_augmented(struct test_node *node, struct rb_root *root)
 {
-- 
1.7.7.3
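
For illustration, a new augmented rbtree user would then write roughly the
following. struct interval_node and compute_subtree_last are hypothetical,
but the invocation matches the macro as defined above:

	struct interval_node {
		struct rb_node rb;
		u64 start, last;
		u64 subtree_last;	/* max of ->last over this subtree */
	};

	/* recompute the augmented value of one node from its children */
	static inline u64 compute_subtree_last(struct interval_node *node)
	{
		u64 max = node->last;
		u64 child_last;

		if (node->rb.rb_left) {
			child_last = rb_entry(node->rb.rb_left,
					      struct interval_node,
					      rb)->subtree_last;
			if (max < child_last)
				max = child_last;
		}
		if (node->rb.rb_right) {
			child_last = rb_entry(node->rb.rb_right,
					      struct interval_node,
					      rb)->subtree_last;
			if (max < child_last)
				max = child_last;
		}
		return max;
	}

	/* expands to interval_callbacks_propagate/_copy/_rotate plus the
	 * const struct rb_augment_callbacks interval_callbacks itself */
	RB_DECLARE_CALLBACKS(static, interval_callbacks, struct interval_node, rb,
			     u64, subtree_last, compute_subtree_last)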


* Re: [PATCH v3 2/9] rbtree: add __rb_change_child() helper function
  2012-08-20 22:05 ` [PATCH v3 2/9] rbtree: add __rb_change_child() helper function Michel Lespinasse
@ 2012-08-20 22:17   ` Andrew Morton
  2012-08-23 12:05     ` Jan Engelhardt
  2012-09-26 22:34     ` Daniel Santos
  0 siblings, 2 replies; 16+ messages in thread
From: Andrew Morton @ 2012-08-20 22:17 UTC (permalink / raw)
  To: Michel Lespinasse
  Cc: riel, peterz, daniel.santos, aarcange, dwmw2, linux-mm,
	linux-kernel, torvalds

On Mon, 20 Aug 2012 15:05:24 -0700
Michel Lespinasse <walken@google.com> wrote:

> Add __rb_change_child() as an inline helper function to replace code that
> would otherwise be duplicated 4 times in the source.
> 
> No changes to binary size or speed.
> 
> ...
>
> --- a/lib/rbtree.c
> +++ b/lib/rbtree.c
> @@ -66,6 +66,19 @@ static inline struct rb_node *rb_red_parent(struct rb_node *red)
>  	return (struct rb_node *)red->__rb_parent_color;
>  }
>  
> +static inline void
> +__rb_change_child(struct rb_node *old, struct rb_node *new,
> +		  struct rb_node *parent, struct rb_root *root)
> +{
> +	if (parent) {
> +		if (parent->rb_left == old)
> +			parent->rb_left = new;
> +		else
> +			parent->rb_right = new;
> +	} else
> +		root->rb_node = new;
> +}

I'm inclined to agree with Peter here - "inline" is now a vague,
pathetic and useless thing.  The problem is that the reader just
doesn't *know* whether or not the writer really wanted it to be
inlined.

If we have carefully made a decision to inline a function, we should
(now) use __always_inline.

If we have carefully made a decision to not inline a function, we
should use noinline.

If we don't care, we should omit all such markings.

This leaves no place for "inline"?
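
Concretely, that gives three possible markings; a sketch with placeholder
functions:

	static __always_inline int f(int x)	/* deliberately inlined */
	{
		return x + 1;
	}

	static noinline int g(int x)		/* deliberately not inlined */
	{
		return x + 1;
	}

	static int h(int x)			/* no preference stated */
	{
		return x + 1;
	}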


Marking it noinline shrinks the text by 60-odd bytes.  Given the number
of args, my gut feel is that this will be slower, despite the cache
benefit.  But that might be wrong.



* Re: [PATCH v3 3/9] rbtree: place easiest case first in rb_erase()
  2012-08-20 22:05 ` [PATCH v3 3/9] rbtree: place easiest case first in rb_erase() Michel Lespinasse
@ 2012-08-20 22:28   ` Andrew Morton
  0 siblings, 0 replies; 16+ messages in thread
From: Andrew Morton @ 2012-08-20 22:28 UTC (permalink / raw)
  To: Michel Lespinasse
  Cc: riel, peterz, daniel.santos, aarcange, dwmw2, linux-mm,
	linux-kernel, torvalds

On Mon, 20 Aug 2012 15:05:25 -0700
Michel Lespinasse <walken@google.com> wrote:

> In rb_erase, move the easy case (node to erase has no more than
> 1 child) first. I feel the code reads easier that way.

Well.  For efficiency we should put the commonest case first.  Is that
the case here?

> --- a/lib/rbtree.c
> +++ b/lib/rbtree.c
> @@ -368,17 +368,28 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
>  
>  void rb_erase(struct rb_node *node, struct rb_root *root)
>  {
> -	struct rb_node *child, *parent;
> +	struct rb_node *child = node->rb_right, *tmp = node->rb_left;

Coding style nit: multiple-definitions-per-line makes it harder to
locate a particular definition, and from long experience I can assure
you that it makes management of subsequent overlapping patches quite a
lot harder.  Also, one-definition-per-line gives room for a nice little
comment, and we all like nice little comments.

Also, "tmp" is a rotten name.  Your choice of an identifier is your
opportunity to communicate something to the reader.  When you choose
"tmp", you threw away that opportunity.  Should it be called "left"?


--- a/lib/rbtree.c~rbtree-place-easiest-case-first-in-rb_erase-fix
+++ a/lib/rbtree.c
@@ -368,12 +368,13 @@ static void __rb_erase_color(struct rb_n
 
 void rb_erase(struct rb_node *node, struct rb_root *root)
 {
-	struct rb_node *child = node->rb_right, *tmp = node->rb_left;
+	struct rb_node *child = node->rb_right;
+	struct rb_node *tmp = node->rb_left;
 	struct rb_node *parent;
 	int color;
 
 	if (!tmp) {
-	case1:
+case1:
 		/* Case 1: node to erase has no more than 1 child (easy!) */
 
 		parent = rb_parent(node);
_



* Re: [PATCH v3 2/9] rbtree: add __rb_change_child() helper function
  2012-08-20 22:17   ` Andrew Morton
@ 2012-08-23 12:05     ` Jan Engelhardt
  2012-08-23 13:08       ` David Woodhouse
  2012-09-26 22:34     ` Daniel Santos
  1 sibling, 1 reply; 16+ messages in thread
From: Jan Engelhardt @ 2012-08-23 12:05 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michel Lespinasse, riel, peterz, daniel.santos, aarcange, dwmw2,
	linux-mm, linux-kernel, torvalds


On Tuesday 2012-08-21 00:17, Andrew Morton wrote:
>
>If we have carefully made a decision to inline a function, we should
>(now) use __always_inline.
>If we have carefully made a decision to not inline a function, we
>should use noinline.
>
>If we don't care, we should omit all such markings.
>This leaves no place for "inline"?

The current use of "inline" is to shut up the compiler, otherwise gcc
would emit a warning about "function declared but not used".


* Re: [PATCH v3 2/9] rbtree: add __rb_change_child() helper function
  2012-08-23 12:05     ` Jan Engelhardt
@ 2012-08-23 13:08       ` David Woodhouse
  0 siblings, 0 replies; 16+ messages in thread
From: David Woodhouse @ 2012-08-23 13:08 UTC (permalink / raw)
  To: Jan Engelhardt
  Cc: Andrew Morton, Michel Lespinasse, riel, peterz, daniel.santos,
	aarcange, linux-mm, linux-kernel, torvalds


On Thu, 2012-08-23 at 14:05 +0200, Jan Engelhardt wrote:
> The current use of "inline" is to shut up the compiler, otherwise gcc
> would emit a warning about "function declared but not used". 

Don't we have __maybe_unused for that?
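
That is, something like this sketch (the helper name is a placeholder):

	/* no "defined but not used" warning if some config never calls this */
	static __maybe_unused void helper(void)
	{
	}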

-- 
dwmw2



* Re: [PATCH v3 2/9] rbtree: add __rb_change_child() helper function
  2012-08-20 22:17   ` Andrew Morton
  2012-08-23 12:05     ` Jan Engelhardt
@ 2012-09-26 22:34     ` Daniel Santos
  2012-09-26 22:43       ` Andrew Morton
  1 sibling, 1 reply; 16+ messages in thread
From: Daniel Santos @ 2012-09-26 22:34 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michel Lespinasse, riel, peterz, daniel.santos, aarcange, dwmw2,
	linux-mm, linux-kernel, torvalds

Sorry to resurrect the dead here, but I'm playing catch-up and this
looks important.

On 08/20/2012 05:17 PM, Andrew Morton wrote:
> I'm inclined to agree with Peter here - "inline" is now a vague,
> pathetic and useless thing.  The problem is that the reader just
> doesn't *know* whether or not the writer really wanted it to be
> inlined.
>
> If we have carefully made a decision to inline a function, we should
> (now) use __always_inline.
Are we all aware here that __always_inline (a.k.a.
"__attribute__((always_inline))") just means "inline even when not
optimizing"?  This appears to be a very common misunderstanding (unless
the gcc docs are wrong, see
http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html#index-g_t_0040code_007bflatten_007d-function-attribute-2512).

If you want to *force* gcc to inline a function (when inlining is
enabled), you can currently only do it from the calling function by
adding the flatten attribute to it, which I have proposed adding here:
https://lkml.org/lkml/2012/9/25/643.

Thus, all of the __always_inline markings we have in the kernel only
affect unoptimized builds (and maybe -O1?).  If we need this feature
(and I think it would be darned handy!) we'll have to work on gcc to get it.
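
i.e. roughly this sketch, where the function names are placeholders and
flatten is the existing gcc attribute, applied to the *caller*:

	static int helper(int x)
	{
		return x * 2;
	}

	/* gcc inlines every call made directly in this body, if it can,
	 * regardless of how helper() itself is marked */
	static __attribute__((flatten)) int caller(int x)
	{
		return helper(x) + 1;
	}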

Daniel


* Re: [PATCH v3 2/9] rbtree: add __rb_change_child() helper function
  2012-09-26 22:34     ` Daniel Santos
@ 2012-09-26 22:43       ` Andrew Morton
  0 siblings, 0 replies; 16+ messages in thread
From: Andrew Morton @ 2012-09-26 22:43 UTC (permalink / raw)
  To: Daniel Santos
  Cc: Daniel Santos, Michel Lespinasse, riel, peterz, aarcange, dwmw2,
	linux-mm, linux-kernel, torvalds

On Wed, 26 Sep 2012 17:34:11 -0500
Daniel Santos <danielfsantos@att.net> wrote:

> Sorry to resurrect the dead here, but I'm playing catch-up and this
> looks important.
> 
> On 08/20/2012 05:17 PM, Andrew Morton wrote:
> > I'm inclined to agree with Peter here - "inline" is now a vague,
> > pathetic and useless thing.  The problem is that the reader just
> > doesn't *know* whether or not the writer really wanted it to be
> > inlined.
> >
> > If we have carefully made a decision to inline a function, we should
> > (now) use __always_inline.
> Are we all aware here that __always_inline (a.k.a.
> "__attribute__((always_inline))") just means "inline even when not
> optimizing"?  This appears to be a very common misunderstanding (unless
> the gcc docs are wrong, see
> http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html#index-g_t_0040code_007bflatten_007d-function-attribute-2512).
> 
> If you want to *force* gcc to inline a function (when inlining is
> enabled), you can currently only do it from the calling function by
> adding the flatten attribute to it, which I have proposed adding here:
> https://lkml.org/lkml/2012/9/25/643.
> 
> Thus, all of the __always_inline markings we have in the kernel only
> affect unoptimized builds (and maybe -O1?).  If we need this feature
> (and I think it would be darned handy!) we'll have to work on gcc to get it.

When I replace the four __always_inline's in fs/namei.c with "inline",
namei.o's .text shrinks by 2 kbytes (gcc-4.4.4), so __always_inline does
appear to be doing what we think it does?



