* [PATCH v2 0/7] Generic RB-tree helpers
From: Peter Zijlstra @ 2021-01-25 15:09 UTC
To: linux-kernel
Cc: walken, dave, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot, peterz
Hi all,
I found myself needing to write yet another rbtree and remembered I had these
patches gathering dust. I've had them in a git tree pretty much ever since I
posted them last and the robot is telling me they build/work/dance/sing fine.
I'm proposing to stick them in tip and get on with life. What say you?
* [PATCH v2 1/7] rbtree: Add generic add and find helpers
From: Peter Zijlstra @ 2021-01-25 15:09 UTC
To: linux-kernel
Cc: walken, dave, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot, peterz
I've always been bothered by the endless (fragile) boilerplate for
rbtree, and I recently wrote some rbtree helpers for objtool and
figured I should lift them into the kernel and use them more widely.
Provide:
partial-order; less() based:
- rb_add(): add a new entry to the rbtree
- rb_add_cached(): like rb_add(), but for a rb_root_cached
total-order; cmp() based:
- rb_find(): find an entry in an rbtree
- rb_find_add(): find an entry, and add if not found
- rb_find_first(): find the first (leftmost) matching entry
- rb_next_match(): continue from rb_find_first()
- rb_for_each(): iterate a sub-tree using the previous two
Inlining and constant propagation should see the compiler inline the
whole thing, including the various compare functions.
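For illustration, a minimal user of the new API could look like this
(a sketch only; 'struct thing' and its integer key are made up):

	struct thing {
		int key;
		struct rb_node node;
	};

	static struct rb_root thing_tree = RB_ROOT;

	static bool thing_less(struct rb_node *a, const struct rb_node *b)
	{
		return rb_entry(a, struct thing, node)->key <
		       rb_entry(b, struct thing, node)->key;
	}

	static int thing_cmp(const void *key, const struct rb_node *node)
	{
		int k = *(const int *)key;
		const struct thing *t = rb_entry(node, struct thing, node);

		if (k < t->key)
			return -1;
		if (k > t->key)
			return 1;
		return 0;
	}

	static void thing_insert(struct thing *t)
	{
		/* equal keys go right, so in-order keeps insertion order */
		rb_add(&t->node, &thing_tree, thing_less);
	}

	static struct thing *thing_lookup(int k)
	{
		/* bare key lookup; no dummy struct thing on the stack */
		struct rb_node *n = rb_find(&k, &thing_tree, thing_cmp);

		return n ? rb_entry(n, struct thing, node) : NULL;
	}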
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Michel Lespinasse <walken@google.com>
---
include/linux/rbtree.h | 190 ++++++++++++++++++++++++++++++++++++++++++
tools/include/linux/rbtree.h | 192 ++++++++++++++++++++++++++++++++++++++++++-
tools/objtool/elf.c | 73 ++--------------
3 files changed, 392 insertions(+), 63 deletions(-)
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -158,4 +158,194 @@ static inline void rb_replace_node_cache
rb_replace_node(victim, new, &root->rb_root);
}
+/*
+ * The helper functions below use 2 operators with 3 different
+ * calling conventions. The operators are related as follows:
+ *
+ * comp(a->key,b) < 0 := less(a,b)
+ * comp(a->key,b) > 0 := less(b,a)
+ * comp(a->key,b) == 0 := !less(a,b) && !less(b,a)
+ *
+ * If these operators define a partial order on the elements we make no
+ * guarantee on which of the elements matching the key is found. See
+ * rb_find().
+ *
+ * The reason for this is to allow the find() interface without requiring an
+ * on-stack dummy object, which might not be feasible due to object size.
+ */
+
+/**
+ * rb_add_cached() - insert @node into the leftmost cached tree @tree
+ * @node: node to insert
+ * @tree: leftmost cached tree to insert @node into
+ * @less: operator defining the (partial) node order
+ */
+static __always_inline void
+rb_add_cached(struct rb_node *node, struct rb_root_cached *tree,
+ bool (*less)(struct rb_node *, const struct rb_node *))
+{
+ struct rb_node **link = &tree->rb_root.rb_node;
+ struct rb_node *parent = NULL;
+ bool leftmost = true;
+
+ while (*link) {
+ parent = *link;
+ if (less(node, parent)) {
+ link = &parent->rb_left;
+ } else {
+ link = &parent->rb_right;
+ leftmost = false;
+ }
+ }
+
+ rb_link_node(node, parent, link);
+ rb_insert_color_cached(node, tree, leftmost);
+}
+
+/**
+ * rb_add() - insert @node into @tree
+ * @node: node to insert
+ * @tree: tree to insert @node into
+ * @less: operator defining the (partial) node order
+ */
+static __always_inline void
+rb_add(struct rb_node *node, struct rb_root *tree,
+ bool (*less)(struct rb_node *, const struct rb_node *))
+{
+ struct rb_node **link = &tree->rb_node;
+ struct rb_node *parent = NULL;
+
+ while (*link) {
+ parent = *link;
+ if (less(node, parent))
+ link = &parent->rb_left;
+ else
+ link = &parent->rb_right;
+ }
+
+ rb_link_node(node, parent, link);
+ rb_insert_color(node, tree);
+}
+
+/**
+ * rb_find_add() - find equivalent @node in @tree, or add @node
+ * @node: node to look-for / insert
+ * @tree: tree to search / modify
+ * @cmp: operator defining the node order
+ *
+ * Returns the rb_node matching @node, or NULL when no match is found and @node
+ * is inserted.
+ */
+static __always_inline struct rb_node *
+rb_find_add(struct rb_node *node, struct rb_root *tree,
+ int (*cmp)(struct rb_node *, const struct rb_node *))
+{
+ struct rb_node **link = &tree->rb_node;
+ struct rb_node *parent = NULL;
+ int c;
+
+ while (*link) {
+ parent = *link;
+ c = cmp(node, parent);
+
+ if (c < 0)
+ link = &parent->rb_left;
+ else if (c > 0)
+ link = &parent->rb_right;
+ else
+ return parent;
+ }
+
+ rb_link_node(node, parent, link);
+ rb_insert_color(node, tree);
+ return NULL;
+}
+
+/**
+ * rb_find() - find @key in tree @tree
+ * @key: key to match
+ * @tree: tree to search
+ * @cmp: operator defining the node order
+ *
+ * Returns the rb_node matching @key or NULL.
+ */
+static __always_inline struct rb_node *
+rb_find(const void *key, const struct rb_root *tree,
+ int (*cmp)(const void *key, const struct rb_node *))
+{
+ struct rb_node *node = tree->rb_node;
+
+ while (node) {
+ int c = cmp(key, node);
+
+ if (c < 0)
+ node = node->rb_left;
+ else if (c > 0)
+ node = node->rb_right;
+ else
+ return node;
+ }
+
+ return NULL;
+}
+
+/**
+ * rb_find_first() - find the first @key in @tree
+ * @key: key to match
+ * @tree: tree to search
+ * @cmp: operator defining node order
+ *
+ * Returns the leftmost node matching @key, or NULL.
+ */
+static __always_inline struct rb_node *
+rb_find_first(const void *key, const struct rb_root *tree,
+ int (*cmp)(const void *key, const struct rb_node *))
+{
+ struct rb_node *node = tree->rb_node;
+ struct rb_node *match = NULL;
+
+ while (node) {
+ int c = cmp(key, node);
+
+ if (c <= 0) {
+ if (!c)
+ match = node;
+ node = node->rb_left;
+ } else if (c > 0) {
+ node = node->rb_right;
+ }
+ }
+
+ return match;
+}
+
+/**
+ * rb_next_match() - find the next @key after @node
+ * @key: key to match
+ * @node: previous matching node to advance from
+ * @cmp: operator defining node order
+ *
+ * Returns the next node matching @key, or NULL.
+ */
+static __always_inline struct rb_node *
+rb_next_match(const void *key, struct rb_node *node,
+ int (*cmp)(const void *key, const struct rb_node *))
+{
+ node = rb_next(node);
+ if (node && cmp(key, node))
+ node = NULL;
+ return node;
+}
+
+/**
+ * rb_for_each() - iterates a subtree matching @key
+ * @node: iterator
+ * @key: key to match
+ * @tree: tree to search
+ * @cmp: operator defining node order
+ */
+#define rb_for_each(node, key, tree, cmp) \
+ for ((node) = rb_find_first((key), (tree), (cmp)); \
+ (node); (node) = rb_next_match((key), (node), (cmp)))
+
#endif /* _LINUX_RBTREE_H */
--- a/tools/include/linux/rbtree.h
+++ b/tools/include/linux/rbtree.h
@@ -152,4 +152,194 @@ static inline void rb_replace_node_cache
rb_replace_node(victim, new, &root->rb_root);
}
-#endif /* __TOOLS_LINUX_PERF_RBTREE_H */
+/*
+ * The helper functions below use 2 operators with 3 different
+ * calling conventions. The operators are related as follows:
+ *
+ * comp(a->key,b) < 0 := less(a,b)
+ * comp(a->key,b) > 0 := less(b,a)
+ * comp(a->key,b) == 0 := !less(a,b) && !less(b,a)
+ *
+ * If these operators define a partial order on the elements we make no
+ * guarantee on which of the elements matching the key is found. See
+ * rb_find().
+ *
+ * The reason for this is to allow the find() interface without requiring an
+ * on-stack dummy object, which might not be feasible due to object size.
+ */
+
+/**
+ * rb_add_cached() - insert @node into the leftmost cached tree @tree
+ * @node: node to insert
+ * @tree: leftmost cached tree to insert @node into
+ * @less: operator defining the (partial) node order
+ */
+static __always_inline void
+rb_add_cached(struct rb_node *node, struct rb_root_cached *tree,
+ bool (*less)(struct rb_node *, const struct rb_node *))
+{
+ struct rb_node **link = &tree->rb_root.rb_node;
+ struct rb_node *parent = NULL;
+ bool leftmost = true;
+
+ while (*link) {
+ parent = *link;
+ if (less(node, parent)) {
+ link = &parent->rb_left;
+ } else {
+ link = &parent->rb_right;
+ leftmost = false;
+ }
+ }
+
+ rb_link_node(node, parent, link);
+ rb_insert_color_cached(node, tree, leftmost);
+}
+
+/**
+ * rb_add() - insert @node into @tree
+ * @node: node to insert
+ * @tree: tree to insert @node into
+ * @less: operator defining the (partial) node order
+ */
+static __always_inline void
+rb_add(struct rb_node *node, struct rb_root *tree,
+ bool (*less)(struct rb_node *, const struct rb_node *))
+{
+ struct rb_node **link = &tree->rb_node;
+ struct rb_node *parent = NULL;
+
+ while (*link) {
+ parent = *link;
+ if (less(node, parent))
+ link = &parent->rb_left;
+ else
+ link = &parent->rb_right;
+ }
+
+ rb_link_node(node, parent, link);
+ rb_insert_color(node, tree);
+}
+
+/**
+ * rb_find_add() - find equivalent @node in @tree, or add @node
+ * @node: node to look-for / insert
+ * @tree: tree to search / modify
+ * @cmp: operator defining the node order
+ *
+ * Returns the rb_node matching @node, or NULL when no match is found and @node
+ * is inserted.
+ */
+static __always_inline struct rb_node *
+rb_find_add(struct rb_node *node, struct rb_root *tree,
+ int (*cmp)(struct rb_node *, const struct rb_node *))
+{
+ struct rb_node **link = &tree->rb_node;
+ struct rb_node *parent = NULL;
+ int c;
+
+ while (*link) {
+ parent = *link;
+ c = cmp(node, parent);
+
+ if (c < 0)
+ link = &parent->rb_left;
+ else if (c > 0)
+ link = &parent->rb_right;
+ else
+ return parent;
+ }
+
+ rb_link_node(node, parent, link);
+ rb_insert_color(node, tree);
+ return NULL;
+}
+
+/**
+ * rb_find() - find @key in tree @tree
+ * @key: key to match
+ * @tree: tree to search
+ * @cmp: operator defining the node order
+ *
+ * Returns the rb_node matching @key or NULL.
+ */
+static __always_inline struct rb_node *
+rb_find(const void *key, const struct rb_root *tree,
+ int (*cmp)(const void *key, const struct rb_node *))
+{
+ struct rb_node *node = tree->rb_node;
+
+ while (node) {
+ int c = cmp(key, node);
+
+ if (c < 0)
+ node = node->rb_left;
+ else if (c > 0)
+ node = node->rb_right;
+ else
+ return node;
+ }
+
+ return NULL;
+}
+
+/**
+ * rb_find_first() - find the first @key in @tree
+ * @key: key to match
+ * @tree: tree to search
+ * @cmp: operator defining node order
+ *
+ * Returns the leftmost node matching @key, or NULL.
+ */
+static __always_inline struct rb_node *
+rb_find_first(const void *key, const struct rb_root *tree,
+ int (*cmp)(const void *key, const struct rb_node *))
+{
+ struct rb_node *node = tree->rb_node;
+ struct rb_node *match = NULL;
+
+ while (node) {
+ int c = cmp(key, node);
+
+ if (c <= 0) {
+ if (!c)
+ match = node;
+ node = node->rb_left;
+ } else if (c > 0) {
+ node = node->rb_right;
+ }
+ }
+
+ return match;
+}
+
+/**
+ * rb_next_match() - find the next @key after @node
+ * @key: key to match
+ * @node: previous matching node to advance from
+ * @cmp: operator defining node order
+ *
+ * Returns the next node matching @key, or NULL.
+ */
+static __always_inline struct rb_node *
+rb_next_match(const void *key, struct rb_node *node,
+ int (*cmp)(const void *key, const struct rb_node *))
+{
+ node = rb_next(node);
+ if (node && cmp(key, node))
+ node = NULL;
+ return node;
+}
+
+/**
+ * rb_for_each() - iterates a subtree matching @key
+ * @node: iterator
+ * @key: key to match
+ * @tree: tree to search
+ * @cmp: operator defining node order
+ */
+#define rb_for_each(node, key, tree, cmp) \
+ for ((node) = rb_find_first((key), (tree), (cmp)); \
+ (node); (node) = rb_next_match((key), (node), (cmp)))
+
+#endif /* __TOOLS_LINUX_PERF_RBTREE_H */
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -43,75 +43,24 @@ static void elf_hash_init(struct hlist_h
#define elf_hash_for_each_possible(name, obj, member, key) \
hlist_for_each_entry(obj, &name[hash_min(key, elf_hash_bits())], member)
-static void rb_add(struct rb_root *tree, struct rb_node *node,
- int (*cmp)(struct rb_node *, const struct rb_node *))
-{
- struct rb_node **link = &tree->rb_node;
- struct rb_node *parent = NULL;
-
- while (*link) {
- parent = *link;
- if (cmp(node, parent) < 0)
- link = &parent->rb_left;
- else
- link = &parent->rb_right;
- }
-
- rb_link_node(node, parent, link);
- rb_insert_color(node, tree);
-}
-
-static struct rb_node *rb_find_first(const struct rb_root *tree, const void *key,
- int (*cmp)(const void *key, const struct rb_node *))
-{
- struct rb_node *node = tree->rb_node;
- struct rb_node *match = NULL;
-
- while (node) {
- int c = cmp(key, node);
- if (c <= 0) {
- if (!c)
- match = node;
- node = node->rb_left;
- } else if (c > 0) {
- node = node->rb_right;
- }
- }
-
- return match;
-}
-
-static struct rb_node *rb_next_match(struct rb_node *node, const void *key,
- int (*cmp)(const void *key, const struct rb_node *))
-{
- node = rb_next(node);
- if (node && cmp(key, node))
- node = NULL;
- return node;
-}
-
-#define rb_for_each(tree, node, key, cmp) \
- for ((node) = rb_find_first((tree), (key), (cmp)); \
- (node); (node) = rb_next_match((node), (key), (cmp)))
-
-static int symbol_to_offset(struct rb_node *a, const struct rb_node *b)
+static bool symbol_to_offset(struct rb_node *a, const struct rb_node *b)
{
struct symbol *sa = rb_entry(a, struct symbol, node);
struct symbol *sb = rb_entry(b, struct symbol, node);
if (sa->offset < sb->offset)
- return -1;
+ return true;
if (sa->offset > sb->offset)
- return 1;
+ return false;
if (sa->len < sb->len)
- return -1;
+ return true;
if (sa->len > sb->len)
- return 1;
+ return false;
sa->alias = sb;
- return 0;
+ return false;
}
static int symbol_by_offset(const void *key, const struct rb_node *node)
@@ -165,7 +114,7 @@ struct symbol *find_symbol_by_offset(str
{
struct rb_node *node;
- rb_for_each(&sec->symbol_tree, node, &offset, symbol_by_offset) {
+ rb_for_each(node, &offset, &sec->symbol_tree, symbol_by_offset) {
struct symbol *s = rb_entry(node, struct symbol, node);
if (s->offset == offset && s->type != STT_SECTION)
@@ -179,7 +128,7 @@ struct symbol *find_func_by_offset(struc
{
struct rb_node *node;
- rb_for_each(&sec->symbol_tree, node, &offset, symbol_by_offset) {
+ rb_for_each(node, &offset, &sec->symbol_tree, symbol_by_offset) {
struct symbol *s = rb_entry(node, struct symbol, node);
if (s->offset == offset && s->type == STT_FUNC)
@@ -193,7 +142,7 @@ struct symbol *find_symbol_containing(co
{
struct rb_node *node;
- rb_for_each(&sec->symbol_tree, node, &offset, symbol_by_offset) {
+ rb_for_each(node, &offset, &sec->symbol_tree, symbol_by_offset) {
struct symbol *s = rb_entry(node, struct symbol, node);
if (s->type != STT_SECTION)
@@ -207,7 +156,7 @@ struct symbol *find_func_containing(stru
{
struct rb_node *node;
- rb_for_each(&sec->symbol_tree, node, &offset, symbol_by_offset) {
+ rb_for_each(node, &offset, &sec->symbol_tree, symbol_by_offset) {
struct symbol *s = rb_entry(node, struct symbol, node);
if (s->type == STT_FUNC)
@@ -439,7 +388,7 @@ static int read_symbols(struct elf *elf)
sym->offset = sym->sym.st_value;
sym->len = sym->sym.st_size;
- rb_add(&sym->sec->symbol_tree, &sym->node, symbol_to_offset);
+ rb_add(&sym->node, &sym->sec->symbol_tree, symbol_to_offset);
pnode = rb_prev(&sym->node);
if (pnode)
entry = &rb_entry(pnode, struct symbol, node)->list;
* [PATCH v2 2/7] rbtree, sched/fair: Use rb_add_cached()
From: Peter Zijlstra @ 2021-01-25 15:09 UTC
To: linux-kernel
Cc: walken, dave, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot, peterz
Reduce rbtree boilerplate by using the new helper function.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/sched/fair.c | 46 ++++++++++++++--------------------------------
1 file changed, 14 insertions(+), 32 deletions(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -533,12 +533,15 @@ static inline u64 min_vruntime(u64 min_v
return min_vruntime;
}
-static inline int entity_before(struct sched_entity *a,
+static inline bool entity_before(struct sched_entity *a,
struct sched_entity *b)
{
return (s64)(a->vruntime - b->vruntime) < 0;
}
+#define __node_2_se(node) \
+ rb_entry((node), struct sched_entity, run_node)
+
static void update_min_vruntime(struct cfs_rq *cfs_rq)
{
struct sched_entity *curr = cfs_rq->curr;
@@ -554,8 +557,7 @@ static void update_min_vruntime(struct c
}
if (leftmost) { /* non-empty tree */
- struct sched_entity *se;
- se = rb_entry(leftmost, struct sched_entity, run_node);
+ struct sched_entity *se = __node_2_se(leftmost);
if (!curr)
vruntime = se->vruntime;
@@ -571,37 +573,17 @@ static void update_min_vruntime(struct c
#endif
}
+static inline bool __entity_less(struct rb_node *a, const struct rb_node *b)
+{
+ return entity_before(__node_2_se(a), __node_2_se(b));
+}
+
/*
* Enqueue an entity into the rb-tree:
*/
static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
- struct rb_node **link = &cfs_rq->tasks_timeline.rb_root.rb_node;
- struct rb_node *parent = NULL;
- struct sched_entity *entry;
- bool leftmost = true;
-
- /*
- * Find the right place in the rbtree:
- */
- while (*link) {
- parent = *link;
- entry = rb_entry(parent, struct sched_entity, run_node);
- /*
- * We dont care about collisions. Nodes with
- * the same key stay together.
- */
- if (entity_before(se, entry)) {
- link = &parent->rb_left;
- } else {
- link = &parent->rb_right;
- leftmost = false;
- }
- }
-
- rb_link_node(&se->run_node, parent, link);
- rb_insert_color_cached(&se->run_node,
- &cfs_rq->tasks_timeline, leftmost);
+ rb_add_cached(&se->run_node, &cfs_rq->tasks_timeline, __entity_less);
}
static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
@@ -616,7 +598,7 @@ struct sched_entity *__pick_first_entity
if (!left)
return NULL;
- return rb_entry(left, struct sched_entity, run_node);
+ return __node_2_se(left);
}
static struct sched_entity *__pick_next_entity(struct sched_entity *se)
@@ -626,7 +608,7 @@ static struct sched_entity *__pick_next_
if (!next)
return NULL;
- return rb_entry(next, struct sched_entity, run_node);
+ return __node_2_se(next);
}
#ifdef CONFIG_SCHED_DEBUG
@@ -637,7 +619,7 @@ struct sched_entity *__pick_last_entity(
if (!last)
return NULL;
- return rb_entry(last, struct sched_entity, run_node);
+ return __node_2_se(last);
}
/**************************************************************
* [PATCH v2 3/7] rbtree, sched/deadline: Use rb_add_cached()
From: Peter Zijlstra @ 2021-01-25 15:09 UTC
To: linux-kernel
Cc: walken, dave, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot, peterz
Reduce rbtree boilerplate by using the new helpers.
Make rb_add_cached() / rb_erase_cached() return a pointer to the
leftmost node to aid in updating additional state.
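The intended calling pattern then becomes (sketch; entry_less() and
update_from_leftmost() stand in for caller-specific code):

	struct rb_node *leftmost;

	leftmost = rb_add_cached(&entry->node, &root, entry_less);
	if (leftmost)	/* @entry became the new leftmost */
		update_from_leftmost(leftmost);

	leftmost = rb_erase_cached(&entry->node, &root);
	if (leftmost)	/* the cached leftmost changed */
		update_from_leftmost(leftmost);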
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
include/linux/rbtree.h | 18 ++++++++---
kernel/sched/deadline.c | 77 +++++++++++++++++-------------------------------
2 files changed, 42 insertions(+), 53 deletions(-)
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -141,12 +141,18 @@ static inline void rb_insert_color_cache
rb_insert_color(node, &root->rb_root);
}
-static inline void rb_erase_cached(struct rb_node *node,
- struct rb_root_cached *root)
+
+static inline struct rb_node *
+rb_erase_cached(struct rb_node *node, struct rb_root_cached *root)
{
+ struct rb_node *leftmost = NULL;
+
if (root->rb_leftmost == node)
- root->rb_leftmost = rb_next(node);
+ leftmost = root->rb_leftmost = rb_next(node);
+
rb_erase(node, &root->rb_root);
+
+ return leftmost;
}
static inline void rb_replace_node_cached(struct rb_node *victim,
@@ -179,8 +185,10 @@ static inline void rb_replace_node_cache
* @node: node to insert
* @tree: leftmost cached tree to insert @node into
* @less: operator defining the (partial) node order
+ *
+ * Returns @node when it is the new leftmost, or NULL.
*/
-static __always_inline void
+static __always_inline struct rb_node *
rb_add_cached(struct rb_node *node, struct rb_root_cached *tree,
bool (*less)(struct rb_node *, const struct rb_node *))
{
@@ -200,6 +208,8 @@ rb_add_cached(struct rb_node *node, stru
rb_link_node(node, parent, link);
rb_insert_color_cached(node, tree, leftmost);
+
+ return leftmost ? node : NULL;
}
/**
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -517,58 +517,44 @@ static void dec_dl_migration(struct sche
update_dl_migration(dl_rq);
}
+#define __node_2_pdl(node) \
+ rb_entry((node), struct task_struct, pushable_dl_tasks)
+
+static inline bool __pushable_less(struct rb_node *a, const struct rb_node *b)
+{
+ return dl_entity_preempt(&__node_2_pdl(a)->dl, &__node_2_pdl(b)->dl);
+}
+
/*
* The list of pushable -deadline task is not a plist, like in
* sched_rt.c, it is an rb-tree with tasks ordered by deadline.
*/
static void enqueue_pushable_dl_task(struct rq *rq, struct task_struct *p)
{
- struct dl_rq *dl_rq = &rq->dl;
- struct rb_node **link = &dl_rq->pushable_dl_tasks_root.rb_root.rb_node;
- struct rb_node *parent = NULL;
- struct task_struct *entry;
- bool leftmost = true;
+ struct rb_node *leftmost;
BUG_ON(!RB_EMPTY_NODE(&p->pushable_dl_tasks));
- while (*link) {
- parent = *link;
- entry = rb_entry(parent, struct task_struct,
- pushable_dl_tasks);
- if (dl_entity_preempt(&p->dl, &entry->dl))
- link = &parent->rb_left;
- else {
- link = &parent->rb_right;
- leftmost = false;
- }
- }
-
+ leftmost = rb_add_cached(&p->pushable_dl_tasks,
+ &rq->dl.pushable_dl_tasks_root,
+ __pushable_less);
if (leftmost)
- dl_rq->earliest_dl.next = p->dl.deadline;
-
- rb_link_node(&p->pushable_dl_tasks, parent, link);
- rb_insert_color_cached(&p->pushable_dl_tasks,
- &dl_rq->pushable_dl_tasks_root, leftmost);
+ rq->dl.earliest_dl.next = p->dl.deadline;
}
static void dequeue_pushable_dl_task(struct rq *rq, struct task_struct *p)
{
struct dl_rq *dl_rq = &rq->dl;
+ struct rb_root_cached *root = &dl_rq->pushable_dl_tasks_root;
+ struct rb_node *leftmost;
if (RB_EMPTY_NODE(&p->pushable_dl_tasks))
return;
- if (dl_rq->pushable_dl_tasks_root.rb_leftmost == &p->pushable_dl_tasks) {
- struct rb_node *next_node;
-
- next_node = rb_next(&p->pushable_dl_tasks);
- if (next_node) {
- dl_rq->earliest_dl.next = rb_entry(next_node,
- struct task_struct, pushable_dl_tasks)->dl.deadline;
- }
- }
+ leftmost = rb_erase_cached(&p->pushable_dl_tasks, root);
+ if (leftmost)
+ dl_rq->earliest_dl.next = __node_2_pdl(leftmost)->dl.deadline;
- rb_erase_cached(&p->pushable_dl_tasks, &dl_rq->pushable_dl_tasks_root);
RB_CLEAR_NODE(&p->pushable_dl_tasks);
}
@@ -1478,29 +1464,21 @@ void dec_dl_tasks(struct sched_dl_entity
dec_dl_migration(dl_se, dl_rq);
}
+#define __node_2_dle(node) \
+ rb_entry((node), struct sched_dl_entity, rb_node)
+
+static inline bool __dl_less(struct rb_node *a, const struct rb_node *b)
+{
+ return dl_time_before(__node_2_dle(a)->deadline, __node_2_dle(b)->deadline);
+}
+
static void __enqueue_dl_entity(struct sched_dl_entity *dl_se)
{
struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
- struct rb_node **link = &dl_rq->root.rb_root.rb_node;
- struct rb_node *parent = NULL;
- struct sched_dl_entity *entry;
- int leftmost = 1;
BUG_ON(!RB_EMPTY_NODE(&dl_se->rb_node));
- while (*link) {
- parent = *link;
- entry = rb_entry(parent, struct sched_dl_entity, rb_node);
- if (dl_time_before(dl_se->deadline, entry->deadline))
- link = &parent->rb_left;
- else {
- link = &parent->rb_right;
- leftmost = 0;
- }
- }
-
- rb_link_node(&dl_se->rb_node, parent, link);
- rb_insert_color_cached(&dl_se->rb_node, &dl_rq->root, leftmost);
+ rb_add_cached(&dl_se->rb_node, &dl_rq->root, __dl_less);
inc_dl_tasks(dl_se, dl_rq);
}
@@ -1513,6 +1491,7 @@ static void __dequeue_dl_entity(struct s
return;
rb_erase_cached(&dl_se->rb_node, &dl_rq->root);
+
RB_CLEAR_NODE(&dl_se->rb_node);
dec_dl_tasks(dl_se, dl_rq);
* [PATCH v2 4/7] rbtree, perf: Use new rbtree helpers
From: Peter Zijlstra @ 2021-01-25 15:09 UTC
To: linux-kernel
Cc: walken, dave, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot, peterz, Tejun Heo
Reduce rbtree boilerplate by using the new helpers.
One noteworthy change is the unification of the various (partial) compare
functions. We construct a subtree match by forcing the sub-order to
always match: the lookup key passes the node's own group_index back in,
so that component always compares equal; see __group_cmp().
Due to 'const' we had to touch cgroup_id().
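The shape of that unification, abstracted to a two-part (major, minor)
key (an illustrative sketch, not code from this patch):

	struct item {
		int major;
		u64 minor;
		struct rb_node rb;
	};

	static int item_cmp(int major, u64 minor, const struct item *b)
	{
		if (major != b->major)
			return major < b->major ? -1 : 1;
		if (minor != b->minor)
			return minor < b->minor ? -1 : 1;
		return 0;
	}

	/* subtree match on @major alone: feeding b's own @minor back in
	 * forces the sub-order to always compare equal */
	static int item_major_cmp(const void *key, const struct rb_node *node)
	{
		const struct item *b = rb_entry(node, struct item, rb);

		return item_cmp(*(const int *)key, b->minor, b);
	}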
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
include/linux/cgroup.h | 4
kernel/events/core.c | 201 ++++++++++++++++++++++---------------------------
2 files changed, 95 insertions(+), 110 deletions(-)
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -307,7 +307,7 @@ void css_task_iter_end(struct css_task_i
* Inline functions.
*/
-static inline u64 cgroup_id(struct cgroup *cgrp)
+static inline u64 cgroup_id(const struct cgroup *cgrp)
{
return cgrp->kn->id;
}
@@ -701,7 +701,7 @@ void cgroup_path_from_kernfs_id(u64 id,
struct cgroup_subsys_state;
struct cgroup;
-static inline u64 cgroup_id(struct cgroup *cgrp) { return 1; }
+static inline u64 cgroup_id(const struct cgroup *cgrp) { return 1; }
static inline void css_get(struct cgroup_subsys_state *css) {}
static inline void css_put(struct cgroup_subsys_state *css) {}
static inline int cgroup_attach_task_all(struct task_struct *from,
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1575,50 +1575,91 @@ static void perf_event_groups_init(struc
groups->index = 0;
}
+static inline struct cgroup *event_cgroup(const struct perf_event *event)
+{
+ struct cgroup *cgroup = NULL;
+
+#ifdef CONFIG_CGROUP_PERF
+ if (event->cgrp)
+ cgroup = event->cgrp->css.cgroup;
+#endif
+
+ return cgroup;
+}
+
/*
* Compare function for event groups;
*
* Implements complex key that first sorts by CPU and then by virtual index
* which provides ordering when rotating groups for the same CPU.
*/
-static bool
-perf_event_groups_less(struct perf_event *left, struct perf_event *right)
-{
- if (left->cpu < right->cpu)
- return true;
- if (left->cpu > right->cpu)
- return false;
+static __always_inline int
+perf_event_groups_cmp(const int left_cpu, const struct cgroup *left_cgroup,
+ const u64 left_group_index, const struct perf_event *right)
+{
+ if (left_cpu < right->cpu)
+ return -1;
+ if (left_cpu > right->cpu)
+ return 1;
#ifdef CONFIG_CGROUP_PERF
- if (left->cgrp != right->cgrp) {
- if (!left->cgrp || !left->cgrp->css.cgroup) {
- /*
- * Left has no cgroup but right does, no cgroups come
- * first.
- */
- return true;
- }
- if (!right->cgrp || !right->cgrp->css.cgroup) {
- /*
- * Right has no cgroup but left does, no cgroups come
- * first.
- */
- return false;
- }
- /* Two dissimilar cgroups, order by id. */
- if (left->cgrp->css.cgroup->kn->id < right->cgrp->css.cgroup->kn->id)
- return true;
+ {
+ const struct cgroup *right_cgroup = event_cgroup(right);
+
+ if (left_cgroup != right_cgroup) {
+ if (!left_cgroup) {
+ /*
+ * Left has no cgroup but right does, no
+ * cgroups come first.
+ */
+ return -1;
+ }
+ if (!right_cgroup) {
+ /*
+ * Right has no cgroup but left does, no
+ * cgroups come first.
+ */
+ return 1;
+ }
+ /* Two dissimilar cgroups, order by id. */
+ if (cgroup_id(left_cgroup) < cgroup_id(right_cgroup))
+ return -1;
- return false;
+ return 1;
+ }
}
#endif
- if (left->group_index < right->group_index)
- return true;
- if (left->group_index > right->group_index)
- return false;
+ if (left_group_index < right->group_index)
+ return -1;
+ if (left_group_index > right->group_index)
+ return 1;
+
+ return 0;
+}
+
+#define __node_2_pe(node) \
+ rb_entry((node), struct perf_event, group_node)
- return false;
+static inline bool __group_less(struct rb_node *a, const struct rb_node *b)
+{
+ struct perf_event *e = __node_2_pe(a);
+ return perf_event_groups_cmp(e->cpu, event_cgroup(e), e->group_index,
+ __node_2_pe(b)) < 0;
+}
+
+struct __group_key {
+ int cpu;
+ struct cgroup *cgroup;
+};
+
+static inline int __group_cmp(const void *key, const struct rb_node *node)
+{
+ const struct __group_key *a = key;
+ const struct perf_event *b = __node_2_pe(node);
+
+ /* partial/subtree match: @cpu, @cgroup; ignore: @group_index */
+ return perf_event_groups_cmp(a->cpu, a->cgroup, b->group_index, b);
}
/*
@@ -1630,27 +1671,9 @@ static void
perf_event_groups_insert(struct perf_event_groups *groups,
struct perf_event *event)
{
- struct perf_event *node_event;
- struct rb_node *parent;
- struct rb_node **node;
-
event->group_index = ++groups->index;
- node = &groups->tree.rb_node;
- parent = *node;
-
- while (*node) {
- parent = *node;
- node_event = container_of(*node, struct perf_event, group_node);
-
- if (perf_event_groups_less(event, node_event))
- node = &parent->rb_left;
- else
- node = &parent->rb_right;
- }
-
- rb_link_node(&event->group_node, parent, node);
- rb_insert_color(&event->group_node, &groups->tree);
+ rb_add(&event->group_node, &groups->tree, __group_less);
}
/*
@@ -1698,45 +1721,17 @@ static struct perf_event *
perf_event_groups_first(struct perf_event_groups *groups, int cpu,
struct cgroup *cgrp)
{
- struct perf_event *node_event = NULL, *match = NULL;
- struct rb_node *node = groups->tree.rb_node;
-#ifdef CONFIG_CGROUP_PERF
- u64 node_cgrp_id, cgrp_id = 0;
-
- if (cgrp)
- cgrp_id = cgrp->kn->id;
-#endif
-
- while (node) {
- node_event = container_of(node, struct perf_event, group_node);
-
- if (cpu < node_event->cpu) {
- node = node->rb_left;
- continue;
- }
- if (cpu > node_event->cpu) {
- node = node->rb_right;
- continue;
- }
-#ifdef CONFIG_CGROUP_PERF
- node_cgrp_id = 0;
- if (node_event->cgrp && node_event->cgrp->css.cgroup)
- node_cgrp_id = node_event->cgrp->css.cgroup->kn->id;
-
- if (cgrp_id < node_cgrp_id) {
- node = node->rb_left;
- continue;
- }
- if (cgrp_id > node_cgrp_id) {
- node = node->rb_right;
- continue;
- }
-#endif
- match = node_event;
- node = node->rb_left;
- }
+ struct __group_key key = {
+ .cpu = cpu,
+ .cgroup = cgrp,
+ };
+ struct rb_node *node;
+
+ node = rb_find_first(&key, &groups->tree, __group_cmp);
+ if (node)
+ return __node_2_pe(node);
- return match;
+ return NULL;
}
/*
@@ -1745,27 +1740,17 @@ perf_event_groups_first(struct perf_even
static struct perf_event *
perf_event_groups_next(struct perf_event *event)
{
- struct perf_event *next;
-#ifdef CONFIG_CGROUP_PERF
- u64 curr_cgrp_id = 0;
- u64 next_cgrp_id = 0;
-#endif
+ struct __group_key key = {
+ .cpu = event->cpu,
+ .cgroup = event_cgroup(event),
+ };
+ struct rb_node *next;
+
+ next = rb_next_match(&key, &event->group_node, __group_cmp);
+ if (next)
+ return __node_2_pe(next);
- next = rb_entry_safe(rb_next(&event->group_node), typeof(*event), group_node);
- if (next == NULL || next->cpu != event->cpu)
- return NULL;
-
-#ifdef CONFIG_CGROUP_PERF
- if (event->cgrp && event->cgrp->css.cgroup)
- curr_cgrp_id = event->cgrp->css.cgroup->kn->id;
-
- if (next->cgrp && next->cgrp->css.cgroup)
- next_cgrp_id = next->cgrp->css.cgroup->kn->id;
-
- if (curr_cgrp_id != next_cgrp_id)
- return NULL;
-#endif
- return next;
+ return NULL;
}
/*
* [PATCH v2 5/7] rbtree, uprobes: Use rbtree helpers
From: Peter Zijlstra @ 2021-01-25 15:09 UTC
To: linux-kernel
Cc: walken, dave, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot, peterz
Reduce rbtree boilerplate by using the new helpers.
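This is also the case the 'no on-stack dummy object' note from patch 1
is about: the lookup side reduces to a small key struct instead of a
whole dummy uprobe, i.e. the pattern:

	struct __uprobe_key key = { .inode = inode, .offset = offset };
	struct rb_node *node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);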
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/events/uprobes.c | 82 +++++++++++++++++++++++-------------------------
1 file changed, 40 insertions(+), 42 deletions(-)
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -619,41 +619,56 @@ static void put_uprobe(struct uprobe *up
}
}
-static int match_uprobe(struct uprobe *l, struct uprobe *r)
+static __always_inline
+int uprobe_cmp(const struct inode *l_inode, const loff_t l_offset,
+ const struct uprobe *r)
{
- if (l->inode < r->inode)
+ if (l_inode < r->inode)
return -1;
- if (l->inode > r->inode)
+ if (l_inode > r->inode)
return 1;
- if (l->offset < r->offset)
+ if (l_offset < r->offset)
return -1;
- if (l->offset > r->offset)
+ if (l_offset > r->offset)
return 1;
return 0;
}
+#define __node_2_uprobe(node) \
+ rb_entry((node), struct uprobe, rb_node)
+
+struct __uprobe_key {
+ struct inode *inode;
+ loff_t offset;
+};
+
+static inline int __uprobe_cmp_key(const void *key, const struct rb_node *b)
+{
+ const struct __uprobe_key *a = key;
+ return uprobe_cmp(a->inode, a->offset, __node_2_uprobe(b));
+}
+
+static inline int __uprobe_cmp(struct rb_node *a, const struct rb_node *b)
+{
+ struct uprobe *u = __node_2_uprobe(a);
+ return uprobe_cmp(u->inode, u->offset, __node_2_uprobe(b));
+}
+
static struct uprobe *__find_uprobe(struct inode *inode, loff_t offset)
{
- struct uprobe u = { .inode = inode, .offset = offset };
- struct rb_node *n = uprobes_tree.rb_node;
- struct uprobe *uprobe;
- int match;
+ struct __uprobe_key key = {
+ .inode = inode,
+ .offset = offset,
+ };
+ struct rb_node *node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
+
+ if (node)
+ return __node_2_uprobe(node);
- while (n) {
- uprobe = rb_entry(n, struct uprobe, rb_node);
- match = match_uprobe(&u, uprobe);
- if (!match)
- return get_uprobe(uprobe);
-
- if (match < 0)
- n = n->rb_left;
- else
- n = n->rb_right;
- }
return NULL;
}
@@ -674,32 +689,15 @@ static struct uprobe *find_uprobe(struct
static struct uprobe *__insert_uprobe(struct uprobe *uprobe)
{
- struct rb_node **p = &uprobes_tree.rb_node;
- struct rb_node *parent = NULL;
- struct uprobe *u;
- int match;
-
- while (*p) {
- parent = *p;
- u = rb_entry(parent, struct uprobe, rb_node);
- match = match_uprobe(uprobe, u);
- if (!match)
- return get_uprobe(u);
-
- if (match < 0)
- p = &parent->rb_left;
- else
- p = &parent->rb_right;
+ struct rb_node *node;
- }
+ node = rb_find_add(&uprobe->rb_node, &uprobes_tree, __uprobe_cmp);
+ if (node)
+ return get_uprobe(__node_2_uprobe(node));
- u = NULL;
- rb_link_node(&uprobe->rb_node, parent, p);
- rb_insert_color(&uprobe->rb_node, &uprobes_tree);
/* get access + creation ref */
refcount_set(&uprobe->ref, 2);
-
- return u;
+ return NULL;
}
/*
* [PATCH v2 6/7] rbtree, rtmutex: Use rb_add_cached()
From: Peter Zijlstra @ 2021-01-25 15:09 UTC
To: linux-kernel
Cc: walken, dave, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot, peterz
Reduce rbtree boilerplate by using the new helpers.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/locking/rtmutex.c | 54 +++++++++++++++--------------------------------
1 file changed, 18 insertions(+), 36 deletions(-)
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -267,27 +267,18 @@ rt_mutex_waiter_equal(struct rt_mutex_wa
return 1;
}
+#define __node_2_waiter(node) \
+ rb_entry((node), struct rt_mutex_waiter, tree_entry)
+
+static inline bool __waiter_less(struct rb_node *a, const struct rb_node *b)
+{
+ return rt_mutex_waiter_less(__node_2_waiter(a), __node_2_waiter(b));
+}
+
static void
rt_mutex_enqueue(struct rt_mutex *lock, struct rt_mutex_waiter *waiter)
{
- struct rb_node **link = &lock->waiters.rb_root.rb_node;
- struct rb_node *parent = NULL;
- struct rt_mutex_waiter *entry;
- bool leftmost = true;
-
- while (*link) {
- parent = *link;
- entry = rb_entry(parent, struct rt_mutex_waiter, tree_entry);
- if (rt_mutex_waiter_less(waiter, entry)) {
- link = &parent->rb_left;
- } else {
- link = &parent->rb_right;
- leftmost = false;
- }
- }
-
- rb_link_node(&waiter->tree_entry, parent, link);
- rb_insert_color_cached(&waiter->tree_entry, &lock->waiters, leftmost);
+ rb_add_cached(&waiter->tree_entry, &lock->waiters, __waiter_less);
}
static void
@@ -300,27 +291,18 @@ rt_mutex_dequeue(struct rt_mutex *lock,
RB_CLEAR_NODE(&waiter->tree_entry);
}
+#define __node_2_pi_waiter(node) \
+ rb_entry((node), struct rt_mutex_waiter, pi_tree_entry)
+
+static inline bool __pi_waiter_less(struct rb_node *a, const struct rb_node *b)
+{
+ return rt_mutex_waiter_less(__node_2_pi_waiter(a), __node_2_pi_waiter(b));
+}
+
static void
rt_mutex_enqueue_pi(struct task_struct *task, struct rt_mutex_waiter *waiter)
{
- struct rb_node **link = &task->pi_waiters.rb_root.rb_node;
- struct rb_node *parent = NULL;
- struct rt_mutex_waiter *entry;
- bool leftmost = true;
-
- while (*link) {
- parent = *link;
- entry = rb_entry(parent, struct rt_mutex_waiter, pi_tree_entry);
- if (rt_mutex_waiter_less(waiter, entry)) {
- link = &parent->rb_left;
- } else {
- link = &parent->rb_right;
- leftmost = false;
- }
- }
-
- rb_link_node(&waiter->pi_tree_entry, parent, link);
- rb_insert_color_cached(&waiter->pi_tree_entry, &task->pi_waiters, leftmost);
+ rb_add_cached(&waiter->pi_tree_entry, &task->pi_waiters, __pi_waiter_less);
}
static void
* [PATCH v2 7/7] rbtree, timerqueue: Use rb_add_cached()
From: Peter Zijlstra @ 2021-01-25 15:10 UTC
To: linux-kernel
Cc: walken, dave, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot, peterz
Reduce rbtree boilerplate by using the new helpers.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
lib/timerqueue.c | 28 +++++++++-------------------
1 file changed, 9 insertions(+), 19 deletions(-)
--- a/lib/timerqueue.c
+++ b/lib/timerqueue.c
@@ -14,6 +14,14 @@
#include <linux/rbtree.h>
#include <linux/export.h>
+#define __node_2_tq(_n) \
+ rb_entry((_n), struct timerqueue_node, node)
+
+static inline bool __timerqueue_less(struct rb_node *a, const struct rb_node *b)
+{
+ return __node_2_tq(a)->expires < __node_2_tq(b)->expires;
+}
+
/**
* timerqueue_add - Adds timer to timerqueue.
*
@@ -26,28 +34,10 @@
*/
bool timerqueue_add(struct timerqueue_head *head, struct timerqueue_node *node)
{
- struct rb_node **p = &head->rb_root.rb_root.rb_node;
- struct rb_node *parent = NULL;
- struct timerqueue_node *ptr;
- bool leftmost = true;
-
/* Make sure we don't add nodes that are already added */
WARN_ON_ONCE(!RB_EMPTY_NODE(&node->node));
- while (*p) {
- parent = *p;
- ptr = rb_entry(parent, struct timerqueue_node, node);
- if (node->expires < ptr->expires) {
- p = &(*p)->rb_left;
- } else {
- p = &(*p)->rb_right;
- leftmost = false;
- }
- }
- rb_link_node(&node->node, parent, p);
- rb_insert_color_cached(&node->node, &head->rb_root, leftmost);
-
- return leftmost;
+ return rb_add_cached(&node->node, &head->rb_root, __timerqueue_less);
}
EXPORT_SYMBOL_GPL(timerqueue_add);
* Re: [PATCH v2 4/7] rbtree, perf: Use new rbtree helpers
From: Tejun Heo @ 2021-01-25 15:20 UTC
To: Peter Zijlstra
Cc: linux-kernel, walken, dave, mingo, tglx, oleg, irogers,
juri.lelli, vincent.guittot
On Mon, Jan 25, 2021 at 04:09:57PM +0100, Peter Zijlstra wrote:
> Reduce rbtree boiler plate by using the new helpers.
>
> One noteworthy change is unification of the various (partial) compare
> functions. We construct a subtree match by forcing the sub-order to
> always match, see __group_cmp().
>
> Due to 'const' we had to touch cgroup_id().
>
> Cc: Tejun Heo <tj@kernel.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Tejun Heo <tj@kernel.org>
Thanks.
--
tejun
* Re: [PATCH v2 5/7] rbtree, uprobes: Use rbtree helpers
From: Davidlohr Bueso @ 2021-01-26 5:59 UTC
To: Peter Zijlstra
Cc: linux-kernel, walken, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot
On Mon, 25 Jan 2021, Peter Zijlstra wrote:
>Reduce rbtree boilerplate by using the new helpers.
This reminds me of:
https://lore.kernel.org/lkml/20200207180305.11092-6-dave@stgolabs.net/
Would a walk optimization (list+rbtree) serve here? Not sure how big
the uprobes_tree gets.
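The idea, roughly (sketch only; node_less() and the global node_list
head are placeholders):

	struct lnode {
		struct rb_node rb;	/* keyed lookup */
		struct list_head list;	/* sorted walk, O(1) per step */
	};

	static void lnode_add(struct lnode *n, struct rb_root *tree)
	{
		struct rb_node *prev;

		rb_add(&n->rb, tree, node_less);
		prev = rb_prev(&n->rb);
		if (prev)	/* splice in after the in-order predecessor */
			list_add(&n->list, &rb_entry(prev, struct lnode, rb)->list);
		else		/* new leftmost: head of the sorted list */
			list_add(&n->list, &node_list);
	}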
Thanks,
Davidlohr
* Re: [PATCH v2 0/7] Generic RB-tree helpers
From: Davidlohr Bueso @ 2021-01-26 6:04 UTC
To: Peter Zijlstra
Cc: linux-kernel, walken, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot
On Mon, 25 Jan 2021, Peter Zijlstra wrote:
>
>Hi all,
>
>I found myself needing to write yet another rbtree and remembered I had these
>patches gathering dust. I've had them in a git tree pretty much ever since I
>posted them last and the robot is telling me they build/work/dance/sing fine.
>
>I'm proposing to stick them in tip and get on with life. What say you?
I like consolidating this code with helpers; it reduces LoC and improves
readability imo. Feel free to add my:
Acked-by: Davidlohr Bueso <dbueso@suse.de>
* Re: [PATCH v2 5/7] rbtree, uprobes: Use rbtree helpers
From: Peter Zijlstra @ 2021-01-26 9:30 UTC
To: Davidlohr Bueso
Cc: linux-kernel, walken, mingo, tglx, oleg, irogers, juri.lelli,
vincent.guittot
On Mon, Jan 25, 2021 at 09:59:49PM -0800, Davidlohr Bueso wrote:
> On Mon, 25 Jan 2021, Peter Zijlstra wrote:
>
> > Reduce rbtree boilerplate by using the new helpers.
>
> This reminds me of:
>
> https://lore.kernel.org/lkml/20200207180305.11092-6-dave@stgolabs.net/
>
> Would a walk optimization (list+rbtree) serve here? Not sure how big
> the uprobes_trees gets.
Oh, that patch set is horrible... You can do a linked list with the
unused node pointers directly.
https://en.wikipedia.org/wiki/Threaded_binary_tree
So instead of using NULL for the empty rb_{left,right} pointers, use
pointers with the LSB set to differentiate them from regular leaf
pointers.
Other than that, threaded rb-trees would be awesome. I've meant to
implement them a number of times but never had the time to do the
tree-wide cleanup of the rb_tree API to enable them.
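The core would look something like this (untested sketch; it assumes
every empty child slot holds an LSB-tagged thread pointer, with the
rightmost thread being a tagged NULL):

	#define RB_THREAD	1UL

	static inline bool rb_is_thread(const struct rb_node *node)
	{
		return (unsigned long)node & RB_THREAD;
	}

	static inline struct rb_node *rb_thread_target(const struct rb_node *node)
	{
		return (struct rb_node *)((unsigned long)node & ~RB_THREAD);
	}

	/* in-order successor without walking parent pointers */
	static struct rb_node *rb_threaded_next(const struct rb_node *node)
	{
		struct rb_node *n = node->rb_right;

		if (rb_is_thread(n))	/* empty slot: follow the thread */
			return rb_thread_target(n);

		while (!rb_is_thread(n->rb_left))	/* leftmost of right subtree */
			n = n->rb_left;
		return n;
	}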