* [PATCH v3 00/10] Clean ups for maple tree
@ 2023-05-22  5:06 Peng Zhang
  2023-05-22  5:06 ` [PATCH v3 01/10] maple_tree: Rework mtree_alloc_{range,rrange}() Peng Zhang
                   ` (9 more replies)
  0 siblings, 10 replies; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

0006-0009 - Make mas_wr_modify() and its subfunctions clearer, and make
            the store operation of the maple tree more efficient in
            some cases.
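
Roughly, the fast-path selection in mas_wr_modify() after this series
looks like the following (paraphrased; the exact conditions are in the
diffs, and mas_wr_append()/mas_wr_slot_store() may still decline and
fall through to mas_wr_node_store()):

	exact range replacement      -> store in place
	new_end = mas_wr_new_end(wr_mas);  /* exact size of the new node */
	new_end >= mt_slots[type]    -> slow path (mas_wr_bnode())
	new_end == node_end + 1      -> try mas_wr_append()
	new_end == node_end          -> try mas_wr_slot_store()
	otherwise                    -> mas_wr_node_store()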

Changes since v2:
 - Change the error detection method. [01/10]
 - Add blank lines after some if statements.
 - Fine-tune the code of mas_is_span_wr(). [04/10]
 - Remove unreachable blocks of code and adjust
   the related code. [07/10] [08/10]

v1: https://lore.kernel.org/lkml/20230515131757.60035-1-zhangpeng.00@bytedance.com/
v2: https://lore.kernel.org/lkml/20230517085809.86522-1-zhangpeng.00@bytedance.com/

Peng Zhang (10):
  maple_tree: Rework mtree_alloc_{range,rrange}()
  maple_tree: Drop mas_{rev_}alloc() and mas_fill_gap()
  maple_tree: Fix the arguments to __must_hold()
  maple_tree: Simplify mas_is_span_wr()
  maple_tree: Make the code symmetrical in mas_wr_extend_null()
  maple_tree: Add mas_wr_new_end() to calculate new_end accurately
  maple_tree: Add comments and some minor cleanups to mas_wr_append()
  maple_tree: Rework mas_wr_slot_store() to be cleaner and more
    efficient.
  maple_tree: Simplify and clean up mas_wr_node_store()
  maple_tree: Relocate the declaration of mas_empty_area_rev().

 include/linux/maple_tree.h |  12 +-
 lib/maple_tree.c           | 451 +++++++++++++------------------------
 2 files changed, 157 insertions(+), 306 deletions(-)

-- 
2.20.1



* [PATCH v3 01/10] maple_tree: Rework mtree_alloc_{range,rrange}()
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-23 17:52   ` Liam R. Howlett
  2023-05-22  5:06 ` [PATCH v3 02/10] maple_tree: Drop mas_{rev_}alloc() and mas_fill_gap() Peng Zhang
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

Use mas_empty_area{_rev}() to refactor mtree_alloc_{range,rrange}().

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/maple_tree.c | 57 +++++++++++++++++++++++++++---------------------
 1 file changed, 32 insertions(+), 25 deletions(-)
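
For illustration, a minimal caller sketch of the reworked interface
(values invented; the prototype and return convention are unchanged:
0 on success, a negative errno such as -EBUSY when no fitting gap
exists):

	unsigned long start;
	int ret;

	/* Find a free range of size 16 within [0, 1000] and store entry. */
	ret = mtree_alloc_range(mt, &start, entry, 16, 0, 1000, GFP_KERNEL);
	if (!ret)
		pr_debug("allocated [%lu, %lu]\n", start, start + 15);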

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 4eb220008f72..96d102d60b4e 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6493,31 +6493,33 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
 {
 	int ret = 0;
 
-	MA_STATE(mas, mt, min, min);
+	MA_STATE(mas, mt, 0, 0);
 	if (!mt_is_alloc(mt))
 		return -EINVAL;
 
 	if (WARN_ON_ONCE(mt_is_reserved(entry)))
 		return -EINVAL;
 
-	if (min > max)
-		return -EINVAL;
-
-	if (max < size)
-		return -EINVAL;
-
-	if (!size)
-		return -EINVAL;
-
 	mtree_lock(mt);
 retry:
-	mas.offset = 0;
-	mas.index = min;
-	mas.last = max - size + 1;
-	ret = mas_alloc(&mas, entry, size, startp);
+	ret = mas_empty_area(&mas, min, max, size);
+	if (ret)
+		goto unlock;
+
+	mas_insert(&mas, entry);
+	/*
+	 * mas_nomem() may release the lock, causing the allocated area
+	 * to be unavailable, so try to allocate a free area again.
+	 */
 	if (mas_nomem(&mas, gfp))
 		goto retry;
 
+	if (mas_is_err(&mas))
+		ret = xa_err(mas.node);
+	else
+		*startp = mas.index;
+
+unlock:
 	mtree_unlock(mt);
 	return ret;
 }
@@ -6529,28 +6531,33 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
 {
 	int ret = 0;
 
-	MA_STATE(mas, mt, min, max - size + 1);
+	MA_STATE(mas, mt, 0, 0);
 	if (!mt_is_alloc(mt))
 		return -EINVAL;
 
 	if (WARN_ON_ONCE(mt_is_reserved(entry)))
 		return -EINVAL;
 
-	if (min > max)
-		return -EINVAL;
-
-	if (max < size - 1)
-		return -EINVAL;
-
-	if (!size)
-		return -EINVAL;
-
 	mtree_lock(mt);
 retry:
-	ret = mas_rev_alloc(&mas, min, max, entry, size, startp);
+	ret = mas_empty_area_rev(&mas, min, max, size);
+	if (ret)
+		goto unlock;
+
+	mas_insert(&mas, entry);
+	/*
+	 * mas_nomem() may release the lock, causing the allocated area
+	 * to be unavailable, so try to allocate a free area again.
+	 */
 	if (mas_nomem(&mas, gfp))
 		goto retry;
 
+	if (mas_is_err(&mas))
+		ret = xa_err(mas.node);
+	else
+		*startp = mas.index;
+
+unlock:
 	mtree_unlock(mt);
 	return ret;
 }
-- 
2.20.1



* [PATCH v3 02/10] maple_tree: Drop mas_{rev_}alloc() and mas_fill_gap()
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
  2023-05-22  5:06 ` [PATCH v3 01/10] maple_tree: Rework mtree_alloc_{range,rrange}() Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-23 17:53   ` Liam R. Howlett
  2023-05-22  5:06 ` [PATCH v3 03/10] maple_tree: Fix the arguments to __must_hold() Peng Zhang
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

mas_{rev_}alloc() and mas_fill_gap() are useless, delete them.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/maple_tree.c | 108 -----------------------------------------------
 1 file changed, 108 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 96d102d60b4e..263bd0ccc31b 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5120,46 +5120,6 @@ static inline void mas_awalk(struct ma_state *mas, unsigned long size)
 	}
 }
 
-/*
- * mas_fill_gap() - Fill a located gap with @entry.
- * @mas: The maple state
- * @entry: The value to store
- * @slot: The offset into the node to store the @entry
- * @size: The size of the entry
- * @index: The start location
- */
-static inline void mas_fill_gap(struct ma_state *mas, void *entry,
-		unsigned char slot, unsigned long size, unsigned long *index)
-{
-	MA_WR_STATE(wr_mas, mas, entry);
-	unsigned char pslot = mte_parent_slot(mas->node);
-	struct maple_enode *mn = mas->node;
-	unsigned long *pivots;
-	enum maple_type ptype;
-	/*
-	 * mas->index is the start address for the search
-	 *  which may no longer be needed.
-	 * mas->last is the end address for the search
-	 */
-
-	*index = mas->index;
-	mas->last = mas->index + size - 1;
-
-	/*
-	 * It is possible that using mas->max and mas->min to correctly
-	 * calculate the index and last will cause an issue in the gap
-	 * calculation, so fix the ma_state here
-	 */
-	mas_ascend(mas);
-	ptype = mte_node_type(mas->node);
-	pivots = ma_pivots(mas_mn(mas), ptype);
-	mas->max = mas_safe_pivot(mas, pivots, pslot, ptype);
-	mas->min = mas_safe_min(mas, pivots, pslot);
-	mas->node = mn;
-	mas->offset = slot;
-	mas_wr_store_entry(&wr_mas);
-}
-
 /*
  * mas_sparse_area() - Internal function.  Return upper or lower limit when
  * searching for a gap in an empty tree.
@@ -5307,74 +5267,6 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
 }
 EXPORT_SYMBOL_GPL(mas_empty_area_rev);
 
-static inline int mas_alloc(struct ma_state *mas, void *entry,
-		unsigned long size, unsigned long *index)
-{
-	unsigned long min;
-
-	mas_start(mas);
-	if (mas_is_none(mas) || mas_is_ptr(mas)) {
-		mas_root_expand(mas, entry);
-		if (mas_is_err(mas))
-			return xa_err(mas->node);
-
-		if (!mas->index)
-			return mas_pivot(mas, 0);
-		return mas_pivot(mas, 1);
-	}
-
-	/* Must be walking a tree. */
-	mas_awalk(mas, size);
-	if (mas_is_err(mas))
-		return xa_err(mas->node);
-
-	if (mas->offset == MAPLE_NODE_SLOTS)
-		goto no_gap;
-
-	/*
-	 * At this point, mas->node points to the right node and we have an
-	 * offset that has a sufficient gap.
-	 */
-	min = mas->min;
-	if (mas->offset)
-		min = mas_pivot(mas, mas->offset - 1) + 1;
-
-	if (mas_is_err(mas))
-		return xa_err(mas->node);
-
-	if (mas->index < min)
-		mas->index = min;
-
-	mas_fill_gap(mas, entry, mas->offset, size, index);
-	return 0;
-
-no_gap:
-	return -EBUSY;
-}
-
-static inline int mas_rev_alloc(struct ma_state *mas, unsigned long min,
-				unsigned long max, void *entry,
-				unsigned long size, unsigned long *index)
-{
-	int ret = 0;
-
-	ret = mas_empty_area_rev(mas, min, max, size);
-	if (ret)
-		return ret;
-
-	if (mas_is_err(mas))
-		return xa_err(mas->node);
-
-	if (mas->offset == MAPLE_NODE_SLOTS)
-		goto no_gap;
-
-	mas_fill_gap(mas, entry, mas->offset, size, index);
-	return 0;
-
-no_gap:
-	return -EBUSY;
-}
-
 /*
  * mte_dead_leaves() - Mark all leaves of a node as dead.
  * @mas: The maple state
-- 
2.20.1



* [PATCH v3 03/10] maple_tree: Fix the arguments to __must_hold()
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
  2023-05-22  5:06 ` [PATCH v3 01/10] maple_tree: Rework mtree_alloc_{range,rrange}() Peng Zhang
  2023-05-22  5:06 ` [PATCH v3 02/10] maple_tree: Drop mas_{rev_}alloc() and mas_fill_gap() Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-23 17:54   ` Liam R. Howlett
  2023-05-22  5:06 ` [PATCH v3 04/10] maple_tree: Simplify mas_is_span_wr() Peng Zhang
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

Fix the arguments to __must_hold() to make sparse work.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/maple_tree.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 263bd0ccc31b..3fe6a6685384 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1752,7 +1752,7 @@ static inline void mas_adopt_children(struct ma_state *mas,
  * leave the node (true) and handle the adoption and free elsewhere.
  */
 static inline void mas_replace(struct ma_state *mas, bool advanced)
-	__must_hold(mas->tree->lock)
+	__must_hold(mas->tree->ma_lock)
 {
 	struct maple_node *mn = mas_mn(mas);
 	struct maple_enode *old_enode;
@@ -1792,7 +1792,7 @@ static inline void mas_replace(struct ma_state *mas, bool advanced)
  * @child: the maple state to store the child.
  */
 static inline bool mas_new_child(struct ma_state *mas, struct ma_state *child)
-	__must_hold(mas->tree->lock)
+	__must_hold(mas->tree->ma_lock)
 {
 	enum maple_type mt;
 	unsigned char offset;
@@ -6204,7 +6204,7 @@ EXPORT_SYMBOL_GPL(mas_erase);
  * Return: true on allocation, false otherwise.
  */
 bool mas_nomem(struct ma_state *mas, gfp_t gfp)
-	__must_hold(mas->tree->lock)
+	__must_hold(mas->tree->ma_lock)
 {
 	if (likely(mas->node != MA_ERROR(-ENOMEM))) {
 		mas_destroy(mas);
-- 
2.20.1



* [PATCH v3 04/10] maple_tree: Simplify mas_is_span_wr()
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
                   ` (2 preceding siblings ...)
  2023-05-22  5:06 ` [PATCH v3 03/10] maple_tree: Fix the arguments to __must_hold() Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-23 17:53   ` Liam R. Howlett
  2023-05-22  5:06 ` [PATCH v3 05/10] maple_tree: Make the code symmetrical in mas_wr_extend_null() Peng Zhang
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

Make the code for detecting spanning writes more concise.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/maple_tree.c | 35 ++++++++++++-----------------------
 1 file changed, 12 insertions(+), 23 deletions(-)
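
For reference, a store "spans" when its range cannot be served entirely
from the node being written. With invented numbers: storing [90, 120]
into a leaf whose mas->max is 100 leaves the tail [101, 120] in a
sibling, so the spanning-store path must take over. The rewritten check
tests containment against r_max first (the common fast path), widens
the bound to the node's max for leaves, and only then applies the
NULL/ULONG_MAX special cases at the boundary.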

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 3fe6a6685384..c47af84047a4 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3728,43 +3728,32 @@ static inline void mas_store_root(struct ma_state *mas, void *entry)
  */
 static bool mas_is_span_wr(struct ma_wr_state *wr_mas)
 {
-	unsigned long max;
+	unsigned long max = wr_mas->r_max;
 	unsigned long last = wr_mas->mas->last;
-	unsigned long piv = wr_mas->r_max;
 	enum maple_type type = wr_mas->type;
 	void *entry = wr_mas->entry;
 
-	/* Contained in this pivot */
-	if (piv > last)
+	/* Contained in this pivot, fast path */
+	if (last < max)
 		return false;
 
-	max = wr_mas->mas->max;
-	if (unlikely(ma_is_leaf(type))) {
-		/* Fits in the node, but may span slots. */
+	if (ma_is_leaf(type)) {
+		max = wr_mas->mas->max;
 		if (last < max)
 			return false;
+	}
 
-		/* Writes to the end of the node but not null. */
-		if ((last == max) && entry)
-			return false;
-
+	if (last == max) {
 		/*
-		 * Writing ULONG_MAX is not a spanning write regardless of the
-		 * value being written as long as the range fits in the node.
+		 * The last entry of leaf node cannot be NULL unless it is the
+		 * rightmost node (writing ULONG_MAX), otherwise it spans slots.
+		 * If this is not leaf node, detect spanning store wr walk.
 		 */
-		if ((last == ULONG_MAX) && (last == max))
-			return false;
-	} else if (piv == last) {
-		if (entry)
-			return false;
-
-		/* Detect spanning store wr walk */
-		if (last == ULONG_MAX)
+		if (entry || last == ULONG_MAX)
 			return false;
 	}
 
-	trace_ma_write(__func__, wr_mas->mas, piv, entry);
-
+	trace_ma_write(__func__, wr_mas->mas, wr_mas->r_max, entry);
 	return true;
 }
 
-- 
2.20.1



* [PATCH v3 05/10] maple_tree: Make the code symmetrical in mas_wr_extend_null()
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
                   ` (3 preceding siblings ...)
  2023-05-22  5:06 ` [PATCH v3 04/10] maple_tree: Simplify mas_is_span_wr() Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-22  5:06 ` [PATCH v3 06/10] maple_tree: Add mas_wr_new_end() to calculate new_end accurately Peng Zhang
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

Just make the code symmetrical to improve readability.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 lib/maple_tree.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)
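
For context: mas_wr_extend_null() widens a NULL store over neighbouring
NULL slots so adjacent empty ranges stay coalesced. With invented
ranges, storing NULL over [5, 10] while [11, 15] already holds NULL
extends mas->last to 15, leaving a single NULL range behind.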

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index c47af84047a4..562507979a4b 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4265,19 +4265,21 @@ static inline void mas_wr_extend_null(struct ma_wr_state *wr_mas)
 {
 	struct ma_state *mas = wr_mas->mas;
 
-	if (mas->last < wr_mas->end_piv && !wr_mas->slots[wr_mas->offset_end])
+	if (!wr_mas->slots[wr_mas->offset_end]) {
+		/* If this one is null, the next and prev are not */
 		mas->last = wr_mas->end_piv;
-
-	/* Check next slot(s) if we are overwriting the end */
-	if ((mas->last == wr_mas->end_piv) &&
-	    (wr_mas->node_end != wr_mas->offset_end) &&
-	    !wr_mas->slots[wr_mas->offset_end + 1]) {
-		wr_mas->offset_end++;
-		if (wr_mas->offset_end == wr_mas->node_end)
-			mas->last = mas->max;
-		else
-			mas->last = wr_mas->pivots[wr_mas->offset_end];
-		wr_mas->end_piv = mas->last;
+	} else {
+		/* Check next slot(s) if we are overwriting the end */
+		if ((mas->last == wr_mas->end_piv) &&
+		    (wr_mas->node_end != wr_mas->offset_end) &&
+		    !wr_mas->slots[wr_mas->offset_end + 1]) {
+			wr_mas->offset_end++;
+			if (wr_mas->offset_end == wr_mas->node_end)
+				mas->last = mas->max;
+			else
+				mas->last = wr_mas->pivots[wr_mas->offset_end];
+			wr_mas->end_piv = mas->last;
+		}
 	}
 
 	if (!wr_mas->content) {
-- 
2.20.1



* [PATCH v3 06/10] maple_tree: Add mas_wr_new_end() to calculate new_end accurately
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
                   ` (4 preceding siblings ...)
  2023-05-22  5:06 ` [PATCH v3 05/10] maple_tree: Make the code symmetrical in mas_wr_extend_null() Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-23 17:54   ` Liam R. Howlett
  2023-05-22  5:06 ` [PATCH v3 07/10] maple_tree: Add comments and some minor cleanups to mas_wr_append() Peng Zhang
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

The previous new_end calculation is inaccurate because it assumes that
two new pivots must always be added, so it sometimes overestimates the
node size, misses the fast path and falls into the slow path. Add
mas_wr_new_end() to calculate new_end exactly, making the conditions
for entering the fast path accurate.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/maple_tree.c | 34 +++++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 11 deletions(-)
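
A worked example with invented numbers: take node_end = 10 and a store
with mas->offset == wr_mas->offset_end that begins exactly at r_min but
ends before end_piv, i.e. it splits one existing range in two and adds
a single pivot:

	new_end = node_end + 2             /* 12 */
	        - (offset_end - offset)    /* - 0 */
	        - 1                        /* r_min == mas->index */
	        = 11

The old estimate, node_end - offset_end + mas->offset + 2 = 12, always
budgeted for two new pivots and would send this store down the slow
path one slot earlier than necessary.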

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 562507979a4b..0550a07355d7 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4297,6 +4297,21 @@ static inline void mas_wr_extend_null(struct ma_wr_state *wr_mas)
 	}
 }
 
+static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
+{
+	struct ma_state *mas = wr_mas->mas;
+	unsigned char new_end = wr_mas->node_end + 2;
+
+	new_end -= wr_mas->offset_end - mas->offset;
+	if (wr_mas->r_min == mas->index)
+		new_end--;
+
+	if (wr_mas->end_piv == mas->last)
+		new_end--;
+
+	return new_end;
+}
+
 static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
 {
 	unsigned char end = wr_mas->node_end;
@@ -4352,9 +4367,8 @@ static void mas_wr_bnode(struct ma_wr_state *wr_mas)
 
 static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
 {
-	unsigned char node_slots;
-	unsigned char node_size;
 	struct ma_state *mas = wr_mas->mas;
+	unsigned char new_end;
 
 	/* Direct replacement */
 	if (wr_mas->r_min == mas->index && wr_mas->r_max == mas->last) {
@@ -4364,17 +4378,15 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
 		return;
 	}
 
-	/* Attempt to append */
-	node_slots = mt_slots[wr_mas->type];
-	node_size = wr_mas->node_end - wr_mas->offset_end + mas->offset + 2;
-	if (mas->max == ULONG_MAX)
-		node_size++;
-
-	/* slot and node store will not fit, go to the slow path */
-	if (unlikely(node_size >= node_slots))
+	/*
+	 * new_end exceeds the size of the maple node and cannot enter the fast
+	 * path.
+	 */
+	new_end = mas_wr_new_end(wr_mas);
+	if (new_end >= mt_slots[wr_mas->type])
 		goto slow_path;
 
-	if (wr_mas->entry && (wr_mas->node_end < node_slots - 1) &&
+	if (wr_mas->entry && (wr_mas->node_end < mt_slots[wr_mas->type] - 1) &&
 	    (mas->offset == wr_mas->node_end) && mas_wr_append(wr_mas)) {
 		if (!wr_mas->content || !wr_mas->entry)
 			mas_update_gap(mas);
-- 
2.20.1



* [PATCH v3 07/10] maple_tree: Add comments and some minor cleanups to mas_wr_append()
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
                   ` (5 preceding siblings ...)
  2023-05-22  5:06 ` [PATCH v3 06/10] maple_tree: Add mas_wr_new_end() to calculate new_end accurately Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-23 17:55   ` Liam R. Howlett
  2023-05-22  5:06 ` [PATCH v3 08/10] maple_tree: Rework mas_wr_slot_store() to be cleaner and more efficient Peng Zhang
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

Add a comment to mas_wr_append(), move mas_update_gap() into
mas_wr_append(), and apply other cleanups to make mas_wr_modify() cleaner.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/maple_tree.c | 47 +++++++++++++++++++++++------------------------
 1 file changed, 23 insertions(+), 24 deletions(-)
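
With mas_wr_new_end() from the previous patch, an append is exactly the
case new_end == node_end + 1 with the store landing in the last slot
(mas->offset == node_end). The two shapes the reworked helper handles,
sketched with invented ranges:

	mas->last == r_max:  | content: r_min .. index-1 | entry: index .. r_max    |
	otherwise:           | entry:   r_min .. last    | content: last+1 .. r_max |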

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 0550a07355d7..afbfdcdde5db 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4312,6 +4312,12 @@ static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
 	return new_end;
 }
 
+/*
+ * mas_wr_append: Attempt to append
+ * @wr_mas: the maple write state
+ *
+ * Return: True if appended, false otherwise
+ */
 static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
 {
 	unsigned char end = wr_mas->node_end;
@@ -4319,34 +4325,30 @@ static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
 	struct ma_state *mas = wr_mas->mas;
 	unsigned char node_pivots = mt_pivots[wr_mas->type];
 
-	if ((mas->index != wr_mas->r_min) && (mas->last == wr_mas->r_max)) {
-		if (new_end < node_pivots)
-			wr_mas->pivots[new_end] = wr_mas->pivots[end];
+	if (mas->offset != wr_mas->node_end)
+		return false;
 
-		if (new_end < node_pivots)
-			ma_set_meta(wr_mas->node, maple_leaf_64, 0, new_end);
+	if (new_end < node_pivots) {
+		wr_mas->pivots[new_end] = wr_mas->pivots[end];
+		ma_set_meta(wr_mas->node, maple_leaf_64, 0, new_end);
+	}
 
+	if (mas->last == wr_mas->r_max) {
+		/* Append to end of range */
 		rcu_assign_pointer(wr_mas->slots[new_end], wr_mas->entry);
-		mas->offset = new_end;
 		wr_mas->pivots[end] = mas->index - 1;
-
-		return true;
-	}
-
-	if ((mas->index == wr_mas->r_min) && (mas->last < wr_mas->r_max)) {
-		if (new_end < node_pivots)
-			wr_mas->pivots[new_end] = wr_mas->pivots[end];
-
+		mas->offset = new_end;
+	} else {
+		/* Append to start of range */
 		rcu_assign_pointer(wr_mas->slots[new_end], wr_mas->content);
-		if (new_end < node_pivots)
-			ma_set_meta(wr_mas->node, maple_leaf_64, 0, new_end);
-
 		wr_mas->pivots[end] = mas->last;
 		rcu_assign_pointer(wr_mas->slots[end], wr_mas->entry);
-		return true;
 	}
 
-	return false;
+	if (!wr_mas->content || !wr_mas->entry)
+		mas_update_gap(mas);
+
+	return  true;
 }
 
 /*
@@ -4386,12 +4388,9 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
 	if (new_end >= mt_slots[wr_mas->type])
 		goto slow_path;
 
-	if (wr_mas->entry && (wr_mas->node_end < mt_slots[wr_mas->type] - 1) &&
-	    (mas->offset == wr_mas->node_end) && mas_wr_append(wr_mas)) {
-		if (!wr_mas->content || !wr_mas->entry)
-			mas_update_gap(mas);
+	/* Attempt to append */
+	if (new_end == wr_mas->node_end + 1 && mas_wr_append(wr_mas))
 		return;
-	}
 
 	if ((wr_mas->offset_end - mas->offset <= 1) && mas_wr_slot_store(wr_mas))
 		return;
-- 
2.20.1



* [PATCH v3 08/10] maple_tree: Rework mas_wr_slot_store() to be cleaner and more efficient.
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
                   ` (6 preceding siblings ...)
  2023-05-22  5:06 ` [PATCH v3 07/10] maple_tree: Add comments and some minor cleanups to mas_wr_append() Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-23 17:59   ` Liam R. Howlett
  2023-05-22  5:06 ` [PATCH v3 09/10] maple_tree: Simplify and clean up mas_wr_node_store() Peng Zhang
  2023-05-22  5:06 ` [PATCH v3 10/10] maple_tree: Relocate the declaration of mas_empty_area_rev() Peng Zhang
  9 siblings, 1 reply; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

The code of mas_wr_slot_store() is messy; make it clearer and more
concise, and add comments. In addition, track whether the two
overwritten ranges are empty to avoid calling mas_update_gap()
unconditionally.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/maple_tree.c | 53 +++++++++++++++++-------------------------------
 1 file changed, 19 insertions(+), 34 deletions(-)
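
Restated outside the kernel for illustration, the new gap bookkeeping
reduces to:

	gap = !slots[offset] || !slots[offset + 1]; /* was either range empty? */
	...
	if (!wr_mas->entry || gap) /* only then can a gap have changed */
		mas_update_gap(mas);

If both overwritten ranges held entries and a non-NULL entry is being
stored, no gap is created or destroyed, so the walk up the tree to
update gap metadata can be skipped.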

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index afbfdcdde5db..1fc872f7683c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4203,49 +4203,34 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas)
 static inline bool mas_wr_slot_store(struct ma_wr_state *wr_mas)
 {
 	struct ma_state *mas = wr_mas->mas;
-	unsigned long lmax; /* Logical max. */
 	unsigned char offset = mas->offset;
+	bool gap = false;
 
-	if ((wr_mas->r_max > mas->last) && ((wr_mas->r_min != mas->index) ||
-				  (offset != wr_mas->node_end)))
+	if (wr_mas->offset_end - offset != 1)
 		return false;
 
-	if (offset == wr_mas->node_end - 1)
-		lmax = mas->max;
-	else
-		lmax = wr_mas->pivots[offset + 1];
-
-	/* going to overwrite too many slots. */
-	if (lmax < mas->last)
-		return false;
-
-	if (wr_mas->r_min == mas->index) {
-		/* overwriting two or more ranges with one. */
-		if (lmax == mas->last)
-			return false;
+	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset);
+	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset + 1);
 
-		/* Overwriting all of offset and a portion of offset + 1. */
+	if (mas->index == wr_mas->r_min) {
+		/* Overwriting the range and over a part of the next range. */
 		rcu_assign_pointer(wr_mas->slots[offset], wr_mas->entry);
 		wr_mas->pivots[offset] = mas->last;
-		goto done;
+	} else {
+		/* Overwriting a part of the range and over the next range */
+		rcu_assign_pointer(wr_mas->slots[offset + 1], wr_mas->entry);
+		wr_mas->pivots[offset] = mas->index - 1;
+		mas->offset++; /* Keep mas accurate. */
 	}
 
-	/* Doesn't end on the next range end. */
-	if (lmax != mas->last)
-		return false;
-
-	/* Overwriting a portion of offset and all of offset + 1 */
-	if ((offset + 1 < mt_pivots[wr_mas->type]) &&
-	    (wr_mas->entry || wr_mas->pivots[offset + 1]))
-		wr_mas->pivots[offset + 1] = mas->last;
-
-	rcu_assign_pointer(wr_mas->slots[offset + 1], wr_mas->entry);
-	wr_mas->pivots[offset] = mas->index - 1;
-	mas->offset++; /* Keep mas accurate. */
-
-done:
 	trace_ma_write(__func__, mas, 0, wr_mas->entry);
-	mas_update_gap(mas);
+	/*
+	 * Only update gap when the new entry is empty or there is an empty
+	 * entry in the original two ranges.
+	 */
+	if (!wr_mas->entry || gap)
+		mas_update_gap(mas);
+
 	return true;
 }
 
@@ -4392,7 +4377,7 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
 	if (new_end == wr_mas->node_end + 1 && mas_wr_append(wr_mas))
 		return;
 
-	if ((wr_mas->offset_end - mas->offset <= 1) && mas_wr_slot_store(wr_mas))
+	if (new_end == wr_mas->node_end && mas_wr_slot_store(wr_mas))
 		return;
 	else if (mas_wr_node_store(wr_mas))
 		return;
-- 
2.20.1



* [PATCH v3 09/10] maple_tree: Simplify and clean up mas_wr_node_store()
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
                   ` (7 preceding siblings ...)
  2023-05-22  5:06 ` [PATCH v3 08/10] maple_tree: Rework mas_wr_slot_store() to be cleaner and more efficient Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-23 18:00   ` Liam R. Howlett
  2023-05-22  5:06 ` [PATCH v3 10/10] maple_tree: Relocate the declaration of mas_empty_area_rev() Peng Zhang
  9 siblings, 1 reply; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

Simplify and clean up mas_wr_node_store(), remove unnecessary code.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/maple_tree.c | 87 +++++++++++++++---------------------------------
 1 file changed, 26 insertions(+), 61 deletions(-)
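
The destination layout built by the rewritten function, sketched with
slot indices relative to the new node:

	[0 .. mas->offset-1]  copied unchanged from the source node
	[mas->offset]         wr_mas->content, pivot mas->index - 1
	                      (only when r_min < mas->index)
	[next]                wr_mas->entry, pivot mas->last
	[next+1 .. new_end]   tail copied from offset_end .. node_end,
	                      when anything remains to the right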

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 1fc872f7683c..aa1472c45757 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4075,52 +4075,27 @@ static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
  *
  * Return: True if stored, false otherwise
  */
-static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas)
+static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
+				     unsigned char new_end)
 {
 	struct ma_state *mas = wr_mas->mas;
 	void __rcu **dst_slots;
 	unsigned long *dst_pivots;
-	unsigned char dst_offset;
-	unsigned char new_end = wr_mas->node_end;
-	unsigned char offset;
-	unsigned char node_slots = mt_slots[wr_mas->type];
+	unsigned char dst_offset, offset_end = wr_mas->offset_end;
 	struct maple_node reuse, *newnode;
-	unsigned char copy_size, max_piv = mt_pivots[wr_mas->type];
+	unsigned char copy_size, node_pivots = mt_pivots[wr_mas->type];
 	bool in_rcu = mt_in_rcu(mas->tree);
 
-	offset = mas->offset;
-	if (mas->last == wr_mas->r_max) {
-		/* runs right to the end of the node */
-		if (mas->last == mas->max)
-			new_end = offset;
-		/* don't copy this offset */
-		wr_mas->offset_end++;
-	} else if (mas->last < wr_mas->r_max) {
-		/* new range ends in this range */
-		if (unlikely(wr_mas->r_max == ULONG_MAX))
-			mas_bulk_rebalance(mas, wr_mas->node_end, wr_mas->type);
-
-		new_end++;
-	} else {
-		if (wr_mas->end_piv == mas->last)
-			wr_mas->offset_end++;
-
-		new_end -= wr_mas->offset_end - offset - 1;
-	}
-
-	/* new range starts within a range */
-	if (wr_mas->r_min < mas->index)
-		new_end++;
-
-	/* Not enough room */
-	if (new_end >= node_slots)
-		return false;
-
-	/* Not enough data. */
+	/* Check if there is enough data. The room is enough. */
 	if (!mte_is_root(mas->node) && (new_end <= mt_min_slots[wr_mas->type]) &&
 	    !(mas->mas_flags & MA_STATE_BULK))
 		return false;
 
+	if (mas->last == wr_mas->end_piv)
+		offset_end++; /* don't copy this offset */
+	else if (unlikely(wr_mas->r_max == ULONG_MAX))
+		mas_bulk_rebalance(mas, wr_mas->node_end, wr_mas->type);
+
 	/* set up node. */
 	if (in_rcu) {
 		mas_node_count(mas, 1);
@@ -4137,47 +4112,36 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas)
 	dst_pivots = ma_pivots(newnode, wr_mas->type);
 	dst_slots = ma_slots(newnode, wr_mas->type);
 	/* Copy from start to insert point */
-	memcpy(dst_pivots, wr_mas->pivots, sizeof(unsigned long) * (offset + 1));
-	memcpy(dst_slots, wr_mas->slots, sizeof(void *) * (offset + 1));
-	dst_offset = offset;
+	memcpy(dst_pivots, wr_mas->pivots, sizeof(unsigned long) * mas->offset);
+	memcpy(dst_slots, wr_mas->slots, sizeof(void *) * mas->offset);
 
 	/* Handle insert of new range starting after old range */
 	if (wr_mas->r_min < mas->index) {
-		mas->offset++;
-		rcu_assign_pointer(dst_slots[dst_offset], wr_mas->content);
-		dst_pivots[dst_offset++] = mas->index - 1;
+		rcu_assign_pointer(dst_slots[mas->offset], wr_mas->content);
+		dst_pivots[mas->offset++] = mas->index - 1;
 	}
 
 	/* Store the new entry and range end. */
-	if (dst_offset < max_piv)
-		dst_pivots[dst_offset] = mas->last;
-	mas->offset = dst_offset;
-	rcu_assign_pointer(dst_slots[dst_offset], wr_mas->entry);
+	if (mas->offset < node_pivots)
+		dst_pivots[mas->offset] = mas->last;
+	rcu_assign_pointer(dst_slots[mas->offset], wr_mas->entry);
 
 	/*
 	 * this range wrote to the end of the node or it overwrote the rest of
 	 * the data
 	 */
-	if (wr_mas->offset_end > wr_mas->node_end || mas->last >= mas->max) {
-		new_end = dst_offset;
+	if (offset_end > wr_mas->node_end)
 		goto done;
-	}
 
-	dst_offset++;
+	dst_offset = mas->offset + 1;
 	/* Copy to the end of node if necessary. */
-	copy_size = wr_mas->node_end - wr_mas->offset_end + 1;
-	memcpy(dst_slots + dst_offset, wr_mas->slots + wr_mas->offset_end,
+	copy_size = wr_mas->node_end - offset_end + 1;
+	memcpy(dst_slots + dst_offset, wr_mas->slots + offset_end,
 	       sizeof(void *) * copy_size);
-	if (dst_offset < max_piv) {
-		if (copy_size > max_piv - dst_offset)
-			copy_size = max_piv - dst_offset;
-
-		memcpy(dst_pivots + dst_offset,
-		       wr_mas->pivots + wr_mas->offset_end,
-		       sizeof(unsigned long) * copy_size);
-	}
+	memcpy(dst_pivots + dst_offset, wr_mas->pivots + offset_end,
+	       sizeof(unsigned long) * (copy_size - 1));
 
-	if ((wr_mas->node_end == node_slots - 1) && (new_end < node_slots - 1))
+	if (new_end < node_pivots)
 		dst_pivots[new_end] = mas->max;
 
 done:
@@ -4379,7 +4343,8 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
 
 	if (new_end == wr_mas->node_end && mas_wr_slot_store(wr_mas))
 		return;
-	else if (mas_wr_node_store(wr_mas))
+
+	if (mas_wr_node_store(wr_mas, new_end))
 		return;
 
 	if (mas_is_err(mas))
-- 
2.20.1



* [PATCH v3 10/10] maple_tree: Relocate the declaration of mas_empty_area_rev().
  2023-05-22  5:06 [PATCH v3 00/10] Clean ups for maple tree Peng Zhang
                   ` (8 preceding siblings ...)
  2023-05-22  5:06 ` [PATCH v3 09/10] maple_tree: Simplify and clean up mas_wr_node_store() Peng Zhang
@ 2023-05-22  5:06 ` Peng Zhang
  2023-05-23 17:59   ` Liam R. Howlett
  9 siblings, 1 reply; 20+ messages in thread
From: Peng Zhang @ 2023-05-22  5:06 UTC (permalink / raw)
  To: Liam.Howlett; +Cc: akpm, linux-mm, linux-kernel, maple-tree, Peng Zhang

Relocate the declaration of mas_empty_area_rev() so that
mas_empty_area() and mas_empty_area_rev() are together.

Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 include/linux/maple_tree.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 85559a34a098..c4681cb8414f 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -474,6 +474,12 @@ void *mas_next_range(struct ma_state *mas, unsigned long max);
 
 int mas_empty_area(struct ma_state *mas, unsigned long min, unsigned long max,
 		   unsigned long size);
+/*
+ * This finds an empty area from the highest address to the lowest.
+ * AKA "Topdown" version,
+ */
+int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
+		       unsigned long max, unsigned long size);
 
 static inline void mas_init(struct ma_state *mas, struct maple_tree *tree,
 			    unsigned long addr)
@@ -497,12 +503,6 @@ static inline bool mas_is_paused(const struct ma_state *mas)
 	return mas->node == MAS_PAUSE;
 }
 
-/*
- * This finds an empty area from the highest address to the lowest.
- * AKA "Topdown" version,
- */
-int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
-		       unsigned long max, unsigned long size);
 /**
  * mas_reset() - Reset a Maple Tree operation state.
  * @mas: Maple Tree operation state.
-- 
2.20.1



* Re: [PATCH v3 01/10] maple_tree: Rework mtree_alloc_{range,rrange}()
  2023-05-22  5:06 ` [PATCH v3 01/10] maple_tree: Rework mtree_alloc_{range,rrange}() Peng Zhang
@ 2023-05-23 17:52   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2023-05-23 17:52 UTC (permalink / raw)
  To: Peng Zhang; +Cc: akpm, linux-mm, linux-kernel, maple-tree

* Peng Zhang <zhangpeng.00@bytedance.com> [230522 01:07]:
> Use mas_empty_area{_rev}() to refactor mtree_alloc_{range,rrange}().

A bit of a short change log, but looks good.

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> 
> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
> ---
>  lib/maple_tree.c | 57 +++++++++++++++++++++++++++---------------------
>  1 file changed, 32 insertions(+), 25 deletions(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 4eb220008f72..96d102d60b4e 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -6493,31 +6493,33 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
>  {
>  	int ret = 0;
>  
> -	MA_STATE(mas, mt, min, min);
> +	MA_STATE(mas, mt, 0, 0);
>  	if (!mt_is_alloc(mt))
>  		return -EINVAL;
>  
>  	if (WARN_ON_ONCE(mt_is_reserved(entry)))
>  		return -EINVAL;
>  
> -	if (min > max)
> -		return -EINVAL;
> -
> -	if (max < size)
> -		return -EINVAL;
> -
> -	if (!size)
> -		return -EINVAL;
> -
>  	mtree_lock(mt);
>  retry:
> -	mas.offset = 0;
> -	mas.index = min;
> -	mas.last = max - size + 1;
> -	ret = mas_alloc(&mas, entry, size, startp);
> +	ret = mas_empty_area(&mas, min, max, size);
> +	if (ret)
> +		goto unlock;
> +
> +	mas_insert(&mas, entry);
> +	/*
> +	 * mas_nomem() may release the lock, causing the allocated area
> +	 * to be unavailable, so try to allocate a free area again.
> +	 */
>  	if (mas_nomem(&mas, gfp))
>  		goto retry;
>  
> +	if (mas_is_err(&mas))
> +		ret = xa_err(mas.node);
> +	else
> +		*startp = mas.index;
> +
> +unlock:
>  	mtree_unlock(mt);
>  	return ret;
>  }
> @@ -6529,28 +6531,33 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
>  {
>  	int ret = 0;
>  
> -	MA_STATE(mas, mt, min, max - size + 1);
> +	MA_STATE(mas, mt, 0, 0);
>  	if (!mt_is_alloc(mt))
>  		return -EINVAL;
>  
>  	if (WARN_ON_ONCE(mt_is_reserved(entry)))
>  		return -EINVAL;
>  
> -	if (min > max)
> -		return -EINVAL;
> -
> -	if (max < size - 1)
> -		return -EINVAL;
> -
> -	if (!size)
> -		return -EINVAL;
> -
>  	mtree_lock(mt);
>  retry:
> -	ret = mas_rev_alloc(&mas, min, max, entry, size, startp);
> +	ret = mas_empty_area_rev(&mas, min, max, size);
> +	if (ret)
> +		goto unlock;
> +
> +	mas_insert(&mas, entry);
> +	/*
> +	 * mas_nomem() may release the lock, causing the allocated area
> +	 * to be unavailable, so try to allocate a free area again.
> +	 */
>  	if (mas_nomem(&mas, gfp))
>  		goto retry;
>  
> +	if (mas_is_err(&mas))
> +		ret = xa_err(mas.node);
> +	else
> +		*startp = mas.index;
> +
> +unlock:
>  	mtree_unlock(mt);
>  	return ret;
>  }
> -- 
> 2.20.1
> 
> 


* Re: [PATCH v3 04/10] maple_tree: Simplify mas_is_span_wr()
  2023-05-22  5:06 ` [PATCH v3 04/10] maple_tree: Simplify mas_is_span_wr() Peng Zhang
@ 2023-05-23 17:53   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2023-05-23 17:53 UTC (permalink / raw)
  To: Peng Zhang; +Cc: akpm, linux-mm, linux-kernel, maple-tree

* Peng Zhang <zhangpeng.00@bytedance.com> [230522 01:07]:
> Make the code for detecting spanning writes more concise.
> 
> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
> ---
>  lib/maple_tree.c | 35 ++++++++++++-----------------------
>  1 file changed, 12 insertions(+), 23 deletions(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 3fe6a6685384..c47af84047a4 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -3728,43 +3728,32 @@ static inline void mas_store_root(struct ma_state *mas, void *entry)
>   */
>  static bool mas_is_span_wr(struct ma_wr_state *wr_mas)
>  {
> -	unsigned long max;
> +	unsigned long max = wr_mas->r_max;
>  	unsigned long last = wr_mas->mas->last;
> -	unsigned long piv = wr_mas->r_max;
>  	enum maple_type type = wr_mas->type;
>  	void *entry = wr_mas->entry;
>  
> -	/* Contained in this pivot */
> -	if (piv > last)
> +	/* Contained in this pivot, fast path */
> +	if (last < max)
>  		return false;
>  
> -	max = wr_mas->mas->max;
> -	if (unlikely(ma_is_leaf(type))) {
> -		/* Fits in the node, but may span slots. */
> +	if (ma_is_leaf(type)) {
> +		max = wr_mas->mas->max;
>  		if (last < max)
>  			return false;
> +	}
>  
> -		/* Writes to the end of the node but not null. */
> -		if ((last == max) && entry)
> -			return false;
> -
> +	if (last == max) {
>  		/*
> -		 * Writing ULONG_MAX is not a spanning write regardless of the
> -		 * value being written as long as the range fits in the node.
> +		 * The last entry of leaf node cannot be NULL unless it is the
> +		 * rightmost node (writing ULONG_MAX), otherwise it spans slots.
> +		 * If this is not leaf node, detect spanning store wr walk.

The last part of this comment should be dropped now.

This patch looks good besides that comment.

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

>  		 */
> -		if ((last == ULONG_MAX) && (last == max))
> -			return false;
> -	} else if (piv == last) {
> -		if (entry)
> -			return false;
> -
> -		/* Detect spanning store wr walk */
> -		if (last == ULONG_MAX)
> +		if (entry || last == ULONG_MAX)
>  			return false;
>  	}
>  
> -	trace_ma_write(__func__, wr_mas->mas, piv, entry);
> -
> +	trace_ma_write(__func__, wr_mas->mas, wr_mas->r_max, entry);
>  	return true;
>  }
>  
> -- 
> 2.20.1
> 
> 


* Re: [PATCH v3 02/10] maple_tree: Drop mas_{rev_}alloc() and mas_fill_gap()
  2023-05-22  5:06 ` [PATCH v3 02/10] maple_tree: Drop mas_{rev_}alloc() and mas_fill_gap() Peng Zhang
@ 2023-05-23 17:53   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2023-05-23 17:53 UTC (permalink / raw)
  To: Peng Zhang; +Cc: akpm, linux-mm, linux-kernel, maple-tree

* Peng Zhang <zhangpeng.00@bytedance.com> [230522 01:07]:
> mas_{rev_}alloc() and mas_fill_gap() are useless, delete them.

s/useless/no longer used/

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> 
> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
> ---
>  lib/maple_tree.c | 108 -----------------------------------------------
>  1 file changed, 108 deletions(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 96d102d60b4e..263bd0ccc31b 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -5120,46 +5120,6 @@ static inline void mas_awalk(struct ma_state *mas, unsigned long size)
>  	}
>  }
>  
> -/*
> - * mas_fill_gap() - Fill a located gap with @entry.
> - * @mas: The maple state
> - * @entry: The value to store
> - * @slot: The offset into the node to store the @entry
> - * @size: The size of the entry
> - * @index: The start location
> - */
> -static inline void mas_fill_gap(struct ma_state *mas, void *entry,
> -		unsigned char slot, unsigned long size, unsigned long *index)
> -{
> -	MA_WR_STATE(wr_mas, mas, entry);
> -	unsigned char pslot = mte_parent_slot(mas->node);
> -	struct maple_enode *mn = mas->node;
> -	unsigned long *pivots;
> -	enum maple_type ptype;
> -	/*
> -	 * mas->index is the start address for the search
> -	 *  which may no longer be needed.
> -	 * mas->last is the end address for the search
> -	 */
> -
> -	*index = mas->index;
> -	mas->last = mas->index + size - 1;
> -
> -	/*
> -	 * It is possible that using mas->max and mas->min to correctly
> -	 * calculate the index and last will cause an issue in the gap
> -	 * calculation, so fix the ma_state here
> -	 */
> -	mas_ascend(mas);
> -	ptype = mte_node_type(mas->node);
> -	pivots = ma_pivots(mas_mn(mas), ptype);
> -	mas->max = mas_safe_pivot(mas, pivots, pslot, ptype);
> -	mas->min = mas_safe_min(mas, pivots, pslot);
> -	mas->node = mn;
> -	mas->offset = slot;
> -	mas_wr_store_entry(&wr_mas);
> -}
> -
>  /*
>   * mas_sparse_area() - Internal function.  Return upper or lower limit when
>   * searching for a gap in an empty tree.
> @@ -5307,74 +5267,6 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
>  }
>  EXPORT_SYMBOL_GPL(mas_empty_area_rev);
>  
> -static inline int mas_alloc(struct ma_state *mas, void *entry,
> -		unsigned long size, unsigned long *index)
> -{
> -	unsigned long min;
> -
> -	mas_start(mas);
> -	if (mas_is_none(mas) || mas_is_ptr(mas)) {
> -		mas_root_expand(mas, entry);
> -		if (mas_is_err(mas))
> -			return xa_err(mas->node);
> -
> -		if (!mas->index)
> -			return mas_pivot(mas, 0);
> -		return mas_pivot(mas, 1);
> -	}
> -
> -	/* Must be walking a tree. */
> -	mas_awalk(mas, size);
> -	if (mas_is_err(mas))
> -		return xa_err(mas->node);
> -
> -	if (mas->offset == MAPLE_NODE_SLOTS)
> -		goto no_gap;
> -
> -	/*
> -	 * At this point, mas->node points to the right node and we have an
> -	 * offset that has a sufficient gap.
> -	 */
> -	min = mas->min;
> -	if (mas->offset)
> -		min = mas_pivot(mas, mas->offset - 1) + 1;
> -
> -	if (mas_is_err(mas))
> -		return xa_err(mas->node);
> -
> -	if (mas->index < min)
> -		mas->index = min;
> -
> -	mas_fill_gap(mas, entry, mas->offset, size, index);
> -	return 0;
> -
> -no_gap:
> -	return -EBUSY;
> -}
> -
> -static inline int mas_rev_alloc(struct ma_state *mas, unsigned long min,
> -				unsigned long max, void *entry,
> -				unsigned long size, unsigned long *index)
> -{
> -	int ret = 0;
> -
> -	ret = mas_empty_area_rev(mas, min, max, size);
> -	if (ret)
> -		return ret;
> -
> -	if (mas_is_err(mas))
> -		return xa_err(mas->node);
> -
> -	if (mas->offset == MAPLE_NODE_SLOTS)
> -		goto no_gap;
> -
> -	mas_fill_gap(mas, entry, mas->offset, size, index);
> -	return 0;
> -
> -no_gap:
> -	return -EBUSY;
> -}
> -
>  /*
>   * mte_dead_leaves() - Mark all leaves of a node as dead.
>   * @mas: The maple state
> -- 
> 2.20.1
> 
> 


* Re: [PATCH v3 03/10] maple_tree: Fix the arguments to __must_hold()
  2023-05-22  5:06 ` [PATCH v3 03/10] maple_tree: Fix the arguments to __must_hold() Peng Zhang
@ 2023-05-23 17:54   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2023-05-23 17:54 UTC (permalink / raw)
  To: Peng Zhang; +Cc: akpm, linux-mm, linux-kernel, maple-tree

* Peng Zhang <zhangpeng.00@bytedance.com> [230522 01:07]:
> Fix the arguments to __must_hold() to make sparse work.
> 
> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> ---
>  lib/maple_tree.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 263bd0ccc31b..3fe6a6685384 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -1752,7 +1752,7 @@ static inline void mas_adopt_children(struct ma_state *mas,
>   * leave the node (true) and handle the adoption and free elsewhere.
>   */
>  static inline void mas_replace(struct ma_state *mas, bool advanced)
> -	__must_hold(mas->tree->lock)
> +	__must_hold(mas->tree->ma_lock)
>  {
>  	struct maple_node *mn = mas_mn(mas);
>  	struct maple_enode *old_enode;
> @@ -1792,7 +1792,7 @@ static inline void mas_replace(struct ma_state *mas, bool advanced)
>   * @child: the maple state to store the child.
>   */
>  static inline bool mas_new_child(struct ma_state *mas, struct ma_state *child)
> -	__must_hold(mas->tree->lock)
> +	__must_hold(mas->tree->ma_lock)
>  {
>  	enum maple_type mt;
>  	unsigned char offset;
> @@ -6204,7 +6204,7 @@ EXPORT_SYMBOL_GPL(mas_erase);
>   * Return: true on allocation, false otherwise.
>   */
>  bool mas_nomem(struct ma_state *mas, gfp_t gfp)
> -	__must_hold(mas->tree->lock)
> +	__must_hold(mas->tree->ma_lock)
>  {
>  	if (likely(mas->node != MA_ERROR(-ENOMEM))) {
>  		mas_destroy(mas);
> -- 
> 2.20.1
> 


* Re: [PATCH v3 06/10] maple_tree: Add mas_wr_new_end() to calculate new_end accurately
  2023-05-22  5:06 ` [PATCH v3 06/10] maple_tree: Add mas_wr_new_end() to calculate new_end accurately Peng Zhang
@ 2023-05-23 17:54   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2023-05-23 17:54 UTC (permalink / raw)
  To: Peng Zhang; +Cc: akpm, linux-mm, linux-kernel, maple-tree

* Peng Zhang <zhangpeng.00@bytedance.com> [230522 01:07]:
> The previous new_end calculation is inaccurate because it assumes that
> two new pivots must always be added, so it sometimes overestimates the
> node size, misses the fast path and falls into the slow path. Add
> mas_wr_new_end() to calculate new_end exactly, making the conditions
> for entering the fast path accurate.
> 

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
> ---
>  lib/maple_tree.c | 34 +++++++++++++++++++++++-----------
>  1 file changed, 23 insertions(+), 11 deletions(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 562507979a4b..0550a07355d7 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4297,6 +4297,21 @@ static inline void mas_wr_extend_null(struct ma_wr_state *wr_mas)
>  	}
>  }
>  
> +static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
> +{
> +	struct ma_state *mas = wr_mas->mas;
> +	unsigned char new_end = wr_mas->node_end + 2;
> +
> +	new_end -= wr_mas->offset_end - mas->offset;
> +	if (wr_mas->r_min == mas->index)
> +		new_end--;
> +
> +	if (wr_mas->end_piv == mas->last)
> +		new_end--;
> +
> +	return new_end;
> +}
> +
>  static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
>  {
>  	unsigned char end = wr_mas->node_end;
> @@ -4352,9 +4367,8 @@ static void mas_wr_bnode(struct ma_wr_state *wr_mas)
>  
>  static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
>  {
> -	unsigned char node_slots;
> -	unsigned char node_size;
>  	struct ma_state *mas = wr_mas->mas;
> +	unsigned char new_end;
>  
>  	/* Direct replacement */
>  	if (wr_mas->r_min == mas->index && wr_mas->r_max == mas->last) {
> @@ -4364,17 +4378,15 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
>  		return;
>  	}
>  
> -	/* Attempt to append */
> -	node_slots = mt_slots[wr_mas->type];
> -	node_size = wr_mas->node_end - wr_mas->offset_end + mas->offset + 2;
> -	if (mas->max == ULONG_MAX)
> -		node_size++;
> -
> -	/* slot and node store will not fit, go to the slow path */
> -	if (unlikely(node_size >= node_slots))
> +	/*
> +	 * new_end exceeds the size of the maple node and cannot enter the fast
> +	 * path.
> +	 */
> +	new_end = mas_wr_new_end(wr_mas);
> +	if (new_end >= mt_slots[wr_mas->type])
>  		goto slow_path;
>  
> -	if (wr_mas->entry && (wr_mas->node_end < node_slots - 1) &&
> +	if (wr_mas->entry && (wr_mas->node_end < mt_slots[wr_mas->type] - 1) &&
>  	    (mas->offset == wr_mas->node_end) && mas_wr_append(wr_mas)) {
>  		if (!wr_mas->content || !wr_mas->entry)
>  			mas_update_gap(mas);
> -- 
> 2.20.1
> 


* Re: [PATCH v3 07/10] maple_tree: Add comments and some minor cleanups to mas_wr_append()
  2023-05-22  5:06 ` [PATCH v3 07/10] maple_tree: Add comments and some minor cleanups to mas_wr_append() Peng Zhang
@ 2023-05-23 17:55   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2023-05-23 17:55 UTC (permalink / raw)
  To: Peng Zhang; +Cc: akpm, linux-mm, linux-kernel, maple-tree

* Peng Zhang <zhangpeng.00@bytedance.com> [230522 01:07]:
> Add a comment to mas_wr_append(), move mas_update_gap() into
> mas_wr_append(), and apply other cleanups to make mas_wr_modify() cleaner.
> 
> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> ---
>  lib/maple_tree.c | 47 +++++++++++++++++++++++------------------------
>  1 file changed, 23 insertions(+), 24 deletions(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 0550a07355d7..afbfdcdde5db 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4312,6 +4312,12 @@ static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
>  	return new_end;
>  }
>  
> +/*
> + * mas_wr_append: Attempt to append
> + * @wr_mas: the maple write state
> + *
> + * Return: True if appended, false otherwise
> + */
>  static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
>  {
>  	unsigned char end = wr_mas->node_end;
> @@ -4319,34 +4325,30 @@ static inline bool mas_wr_append(struct ma_wr_state *wr_mas)
>  	struct ma_state *mas = wr_mas->mas;
>  	unsigned char node_pivots = mt_pivots[wr_mas->type];
>  
> -	if ((mas->index != wr_mas->r_min) && (mas->last == wr_mas->r_max)) {
> -		if (new_end < node_pivots)
> -			wr_mas->pivots[new_end] = wr_mas->pivots[end];
> +	if (mas->offset != wr_mas->node_end)
> +		return false;
>  
> -		if (new_end < node_pivots)
> -			ma_set_meta(wr_mas->node, maple_leaf_64, 0, new_end);
> +	if (new_end < node_pivots) {
> +		wr_mas->pivots[new_end] = wr_mas->pivots[end];
> +		ma_set_meta(wr_mas->node, maple_leaf_64, 0, new_end);
> +	}
>  
> +	if (mas->last == wr_mas->r_max) {
> +		/* Append to end of range */
>  		rcu_assign_pointer(wr_mas->slots[new_end], wr_mas->entry);
> -		mas->offset = new_end;
>  		wr_mas->pivots[end] = mas->index - 1;
> -
> -		return true;
> -	}
> -
> -	if ((mas->index == wr_mas->r_min) && (mas->last < wr_mas->r_max)) {
> -		if (new_end < node_pivots)
> -			wr_mas->pivots[new_end] = wr_mas->pivots[end];
> -
> +		mas->offset = new_end;
> +	} else {
> +		/* Append to start of range */
>  		rcu_assign_pointer(wr_mas->slots[new_end], wr_mas->content);
> -		if (new_end < node_pivots)
> -			ma_set_meta(wr_mas->node, maple_leaf_64, 0, new_end);
> -
>  		wr_mas->pivots[end] = mas->last;
>  		rcu_assign_pointer(wr_mas->slots[end], wr_mas->entry);
> -		return true;
>  	}
>  
> -	return false;
> +	if (!wr_mas->content || !wr_mas->entry)
> +		mas_update_gap(mas);
> +
> +	return  true;
>  }
>  
>  /*
> @@ -4386,12 +4388,9 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
>  	if (new_end >= mt_slots[wr_mas->type])
>  		goto slow_path;
>  
> -	if (wr_mas->entry && (wr_mas->node_end < mt_slots[wr_mas->type] - 1) &&
> -	    (mas->offset == wr_mas->node_end) && mas_wr_append(wr_mas)) {
> -		if (!wr_mas->content || !wr_mas->entry)
> -			mas_update_gap(mas);
> +	/* Attempt to append */
> +	if (new_end == wr_mas->node_end + 1 && mas_wr_append(wr_mas))
>  		return;
> -	}
>  
>  	if ((wr_mas->offset_end - mas->offset <= 1) && mas_wr_slot_store(wr_mas))
>  		return;
> -- 
> 2.20.1
> 


* Re: [PATCH v3 08/10] maple_tree: Rework mas_wr_slot_store() to be cleaner and more efficient.
  2023-05-22  5:06 ` [PATCH v3 08/10] maple_tree: Rework mas_wr_slot_store() to be cleaner and more efficient Peng Zhang
@ 2023-05-23 17:59   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2023-05-23 17:59 UTC (permalink / raw)
  To: Peng Zhang; +Cc: akpm, linux-mm, linux-kernel, maple-tree

* Peng Zhang <zhangpeng.00@bytedance.com> [230522 01:07]:
> The code of mas_wr_slot_store() is messy; make it clearer and more
> concise, and add comments. In addition, track whether the two
> overwritten ranges are empty to avoid calling mas_update_gap()
> unconditionally.

Try to concentrate on what you did in the log: clean up the code and
add comments. But the best part of this patch is the last "in addition"
part; you should start your log with that.

> 
> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> ---
>  lib/maple_tree.c | 53 +++++++++++++++++-------------------------------
>  1 file changed, 19 insertions(+), 34 deletions(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index afbfdcdde5db..1fc872f7683c 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4203,49 +4203,34 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas)
>  static inline bool mas_wr_slot_store(struct ma_wr_state *wr_mas)
>  {
>  	struct ma_state *mas = wr_mas->mas;
> -	unsigned long lmax; /* Logical max. */
>  	unsigned char offset = mas->offset;
> +	bool gap = false;
>  
> -	if ((wr_mas->r_max > mas->last) && ((wr_mas->r_min != mas->index) ||
> -				  (offset != wr_mas->node_end)))
> +	if (wr_mas->offset_end - offset != 1)
>  		return false;
>  
> -	if (offset == wr_mas->node_end - 1)
> -		lmax = mas->max;
> -	else
> -		lmax = wr_mas->pivots[offset + 1];
> -
> -	/* going to overwrite too many slots. */
> -	if (lmax < mas->last)
> -		return false;
> -
> -	if (wr_mas->r_min == mas->index) {
> -		/* overwriting two or more ranges with one. */
> -		if (lmax == mas->last)
> -			return false;
> +	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset);
> +	gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset + 1);
>  
> -		/* Overwriting all of offset and a portion of offset + 1. */
> +	if (mas->index == wr_mas->r_min) {
> +		/* Overwriting the range and over a part of the next range. */
>  		rcu_assign_pointer(wr_mas->slots[offset], wr_mas->entry);
>  		wr_mas->pivots[offset] = mas->last;
> -		goto done;
> +	} else {
> +		/* Overwriting a part of the range and over the next range */
> +		rcu_assign_pointer(wr_mas->slots[offset + 1], wr_mas->entry);
> +		wr_mas->pivots[offset] = mas->index - 1;
> +		mas->offset++; /* Keep mas accurate. */
>  	}
>  
> -	/* Doesn't end on the next range end. */
> -	if (lmax != mas->last)
> -		return false;
> -
> -	/* Overwriting a portion of offset and all of offset + 1 */
> -	if ((offset + 1 < mt_pivots[wr_mas->type]) &&
> -	    (wr_mas->entry || wr_mas->pivots[offset + 1]))
> -		wr_mas->pivots[offset + 1] = mas->last;
> -
> -	rcu_assign_pointer(wr_mas->slots[offset + 1], wr_mas->entry);
> -	wr_mas->pivots[offset] = mas->index - 1;
> -	mas->offset++; /* Keep mas accurate. */
> -
> -done:
>  	trace_ma_write(__func__, mas, 0, wr_mas->entry);
> -	mas_update_gap(mas);
> +	/*
> +	 * Only update the gap when the new entry is empty or one of the
> +	 * two original slots was empty.
> +	 */
> +	if (!wr_mas->entry || gap)
> +		mas_update_gap(mas);
> +
>  	return true;
>  }
>  
> @@ -4392,7 +4377,7 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
>  	if (new_end == wr_mas->node_end + 1 && mas_wr_append(wr_mas))
>  		return;
>  
> -	if ((wr_mas->offset_end - mas->offset <= 1) && mas_wr_slot_store(wr_mas))
> +	if (new_end == wr_mas->node_end && mas_wr_slot_store(wr_mas))
>  		return;
>  	else if (mas_wr_node_store(wr_mas))
>  		return;
> -- 
> 2.20.1
> 
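
Taken together with patch 06, the write path now picks its strategy
purely from the precomputed new_end. Condensed from the hunks in this
series (not the verbatim mas_wr_modify(); error paths are elided):

	new_end = mas_wr_new_end(wr_mas);

	/* One entry appended past the current end of the node. */
	if (new_end == wr_mas->node_end + 1 && mas_wr_append(wr_mas))
		return;

	/* Entry count unchanged: a single slot pair absorbs the write. */
	if (new_end == wr_mas->node_end && mas_wr_slot_store(wr_mas))
		return;

	/* Otherwise rewrite the node contents in one pass. */
	if (mas_wr_node_store(wr_mas, new_end))
		return;

(The two-argument mas_wr_node_store() only appears after patch 09.)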


* Re: [PATCH v3 10/10] maple_tree: Relocate the declaration of mas_empty_area_rev().
  2023-05-22  5:06 ` [PATCH v3 10/10] maple_tree: Relocate the declaration of mas_empty_area_rev() Peng Zhang
@ 2023-05-23 17:59   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2023-05-23 17:59 UTC (permalink / raw)
  To: Peng Zhang; +Cc: akpm, linux-mm, linux-kernel, maple-tree

* Peng Zhang <zhangpeng.00@bytedance.com> [230522 01:07]:
> Relocate the declaration of mas_empty_area_rev() so that
> mas_empty_area() and mas_empty_area_rev() are declared together.
> 
> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> ---
>  include/linux/maple_tree.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
> index 85559a34a098..c4681cb8414f 100644
> --- a/include/linux/maple_tree.h
> +++ b/include/linux/maple_tree.h
> @@ -474,6 +474,12 @@ void *mas_next_range(struct ma_state *mas, unsigned long max);
>  
>  int mas_empty_area(struct ma_state *mas, unsigned long min, unsigned long max,
>  		   unsigned long size);
> +/*
> + * This finds an empty area from the highest address to the lowest.
> + * AKA "Topdown" version.
> + */
> +int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
> +		       unsigned long max, unsigned long size);
>  
>  static inline void mas_init(struct ma_state *mas, struct maple_tree *tree,
>  			    unsigned long addr)
> @@ -497,12 +503,6 @@ static inline bool mas_is_paused(const struct ma_state *mas)
>  	return mas->node == MAS_PAUSE;
>  }
>  
> -/*
> - * This finds an empty area from the highest address to the lowest.
> - * AKA "Topdown" version,
> - */
> -int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
> -		       unsigned long max, unsigned long size);
>  /**
>   * mas_reset() - Reset a Maple Tree operation state.
>   * @mas: Maple Tree operation state.
> -- 
> 2.20.1
> 
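
Since only the declaration moved, a usage sketch may help place it
(illustrative values only; mas_lock()/mas_unlock() and the exact
post-conditions are assumptions here, not taken from this patch):

	MA_STATE(mas, &tree, 0, 0);

	mas_lock(&mas);
	/* Search top-down for a 0x1000-sized empty area in [min, max]. */
	if (!mas_empty_area_rev(&mas, min, max, 0x1000)) {
		/* mas.index .. mas.last should now span the found gap. */
	}
	mas_unlock(&mas);

mas_empty_area() takes the same arguments and searches bottom-up
instead.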


* Re: [PATCH v3 09/10] maple_tree: Simplify and clean up mas_wr_node_store()
  2023-05-22  5:06 ` [PATCH v3 09/10] maple_tree: Simplify and clean up mas_wr_node_store() Peng Zhang
@ 2023-05-23 18:00   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2023-05-23 18:00 UTC (permalink / raw)
  To: Peng Zhang; +Cc: akpm, linux-mm, linux-kernel, maple-tree

* Peng Zhang <zhangpeng.00@bytedance.com> [230522 01:07]:
> Simplify and clean up mas_wr_node_store() and remove unnecessary code.
> 
> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> ---
>  lib/maple_tree.c | 87 +++++++++++++++---------------------------------
>  1 file changed, 26 insertions(+), 61 deletions(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 1fc872f7683c..aa1472c45757 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4075,52 +4075,27 @@ static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
>   *
>   * Return: True if stored, false otherwise
>   */
> -static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas)
> +static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,
> +				     unsigned char new_end)
>  {
>  	struct ma_state *mas = wr_mas->mas;
>  	void __rcu **dst_slots;
>  	unsigned long *dst_pivots;
> -	unsigned char dst_offset;
> -	unsigned char new_end = wr_mas->node_end;
> -	unsigned char offset;
> -	unsigned char node_slots = mt_slots[wr_mas->type];
> +	unsigned char dst_offset, offset_end = wr_mas->offset_end;
>  	struct maple_node reuse, *newnode;
> -	unsigned char copy_size, max_piv = mt_pivots[wr_mas->type];
> +	unsigned char copy_size, node_pivots = mt_pivots[wr_mas->type];
>  	bool in_rcu = mt_in_rcu(mas->tree);
>  
> -	offset = mas->offset;
> -	if (mas->last == wr_mas->r_max) {
> -		/* runs right to the end of the node */
> -		if (mas->last == mas->max)
> -			new_end = offset;
> -		/* don't copy this offset */
> -		wr_mas->offset_end++;
> -	} else if (mas->last < wr_mas->r_max) {
> -		/* new range ends in this range */
> -		if (unlikely(wr_mas->r_max == ULONG_MAX))
> -			mas_bulk_rebalance(mas, wr_mas->node_end, wr_mas->type);
> -
> -		new_end++;
> -	} else {
> -		if (wr_mas->end_piv == mas->last)
> -			wr_mas->offset_end++;
> -
> -		new_end -= wr_mas->offset_end - offset - 1;
> -	}
> -
> -	/* new range starts within a range */
> -	if (wr_mas->r_min < mas->index)
> -		new_end++;
> -
> -	/* Not enough room */
> -	if (new_end >= node_slots)
> -		return false;
> -
> -	/* Not enough data. */
> +	/* Check if there is enough data; the caller already checked the room. */
>  	if (!mte_is_root(mas->node) && (new_end <= mt_min_slots[wr_mas->type]) &&
>  	    !(mas->mas_flags & MA_STATE_BULK))
>  		return false;
>  
> +	if (mas->last == wr_mas->end_piv)
> +		offset_end++; /* don't copy this offset */
> +	else if (unlikely(wr_mas->r_max == ULONG_MAX))
> +		mas_bulk_rebalance(mas, wr_mas->node_end, wr_mas->type);
> +
>  	/* set up node. */
>  	if (in_rcu) {
>  		mas_node_count(mas, 1);
> @@ -4137,47 +4112,36 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas)
>  	dst_pivots = ma_pivots(newnode, wr_mas->type);
>  	dst_slots = ma_slots(newnode, wr_mas->type);
>  	/* Copy from start to insert point */
> -	memcpy(dst_pivots, wr_mas->pivots, sizeof(unsigned long) * (offset + 1));
> -	memcpy(dst_slots, wr_mas->slots, sizeof(void *) * (offset + 1));
> -	dst_offset = offset;
> +	memcpy(dst_pivots, wr_mas->pivots, sizeof(unsigned long) * mas->offset);
> +	memcpy(dst_slots, wr_mas->slots, sizeof(void *) * mas->offset);
>  
>  	/* Handle insert of new range starting after old range */
>  	if (wr_mas->r_min < mas->index) {
> -		mas->offset++;
> -		rcu_assign_pointer(dst_slots[dst_offset], wr_mas->content);
> -		dst_pivots[dst_offset++] = mas->index - 1;
> +		rcu_assign_pointer(dst_slots[mas->offset], wr_mas->content);
> +		dst_pivots[mas->offset++] = mas->index - 1;
>  	}
>  
>  	/* Store the new entry and range end. */
> -	if (dst_offset < max_piv)
> -		dst_pivots[dst_offset] = mas->last;
> -	mas->offset = dst_offset;
> -	rcu_assign_pointer(dst_slots[dst_offset], wr_mas->entry);
> +	if (mas->offset < node_pivots)
> +		dst_pivots[mas->offset] = mas->last;
> +	rcu_assign_pointer(dst_slots[mas->offset], wr_mas->entry);
>  
>  	/*
>  	 * this range wrote to the end of the node or it overwrote the rest of
>  	 * the data
>  	 */
> -	if (wr_mas->offset_end > wr_mas->node_end || mas->last >= mas->max) {
> -		new_end = dst_offset;
> +	if (offset_end > wr_mas->node_end)
>  		goto done;
> -	}
>  
> -	dst_offset++;
> +	dst_offset = mas->offset + 1;
>  	/* Copy to the end of node if necessary. */
> -	copy_size = wr_mas->node_end - wr_mas->offset_end + 1;
> -	memcpy(dst_slots + dst_offset, wr_mas->slots + wr_mas->offset_end,
> +	copy_size = wr_mas->node_end - offset_end + 1;
> +	memcpy(dst_slots + dst_offset, wr_mas->slots + offset_end,
>  	       sizeof(void *) * copy_size);
> -	if (dst_offset < max_piv) {
> -		if (copy_size > max_piv - dst_offset)
> -			copy_size = max_piv - dst_offset;
> -
> -		memcpy(dst_pivots + dst_offset,
> -		       wr_mas->pivots + wr_mas->offset_end,
> -		       sizeof(unsigned long) * copy_size);
> -	}
> +	memcpy(dst_pivots + dst_offset, wr_mas->pivots + offset_end,
> +	       sizeof(unsigned long) * (copy_size - 1));
>  
> -	if ((wr_mas->node_end == node_slots - 1) && (new_end < node_slots - 1))
> +	if (new_end < node_pivots)
>  		dst_pivots[new_end] = mas->max;
>  
>  done:
> @@ -4379,7 +4343,8 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
>  
>  	if (new_end == wr_mas->node_end && mas_wr_slot_store(wr_mas))
>  		return;
> -	else if (mas_wr_node_store(wr_mas))
> +
> +	if (mas_wr_node_store(wr_mas, new_end))
>  		return;
>  
>  	if (mas_is_err(mas))
> -- 
> 2.20.1
> 
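
Stripped of the bookkeeping, what survives is a three-step copy into
the destination node. A condensed sketch of the flow above (not the
verbatim function):

	/* 1) Head: everything before the insert point is unchanged. */
	memcpy(dst_pivots, wr_mas->pivots, sizeof(unsigned long) * mas->offset);
	memcpy(dst_slots, wr_mas->slots, sizeof(void *) * mas->offset);

	/* 2) The new range, preceded by any remainder of the old one. */
	if (wr_mas->r_min < mas->index) {
		rcu_assign_pointer(dst_slots[mas->offset], wr_mas->content);
		dst_pivots[mas->offset++] = mas->index - 1;
	}
	if (mas->offset < node_pivots)
		dst_pivots[mas->offset] = mas->last;
	rcu_assign_pointer(dst_slots[mas->offset], wr_mas->entry);

	/* 3) Tail: whatever lies past the overwritten offsets. */
	if (offset_end <= wr_mas->node_end) {
		copy_size = wr_mas->node_end - offset_end + 1;
		memcpy(dst_slots + mas->offset + 1, wr_mas->slots + offset_end,
		       sizeof(void *) * copy_size);
	}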


