linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/23 v2] mm: Speedup page cache truncation
@ 2020-04-22 15:02 Jan Kara
  2020-04-22 15:02 ` [PATCH 01/23] xarray: Remove stale comment Jan Kara
                   ` (22 more replies)
  0 siblings, 23 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Hello,

this is the second version of my patches to avoid clearing marks in xas_store()
and thus fix a regression in page cache truncation.

Changes since v1:
- rebased on 5.7-rc2
- dropped the xas_for_each_marked() fix as it was already merged
- reworked the whole series based on Matthew's feedback - we now create a new
  function xas_store_noinit() and use it instead of changing xas_store()
  behavior. Note that for xas_store_range() and __xa_cmpxchg() I didn't bother
  to change the names although they stop clearing marks as well: there are
  only very few callers, so it is easy to verify them, and the chances of a
  clash with another patch introducing new callers are very small.

Original motivation:

Conversion of the page cache to xarray (commit 69b6c1319b6 "mm: Convert
truncate to XArray" in particular) has regressed the performance of page cache
truncation by about 10% (see my original report here [1]). This patch series
aims at improving truncation to get some of that regression back.

The first patch fixes a long-standing bug with xas_for_each_marked() that I
uncovered while debugging my patches. The remaining patches then work towards
the ability to stop clearing marks in xas_store(), which improves truncation
performance by about 6%.

The patches have passed radix_tree tests in tools/testing and also fstests runs
for ext4 & xfs.

								Honza

[1] https://lore.kernel.org/linux-mm/20190226165628.GB24711@quack2.suse.cz

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH 01/23] xarray: Remove stale comment
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Since commit 7e934cf5ace1 ("xarray: Fix early termination of
xas_for_each_marked"), this comment is no longer relevant because
xas_for_each_marked() can cope with marked NULL entries. Remove the
comment.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/xarray.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index e9e641d3c0c3..dae68dd13a02 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -803,13 +803,6 @@ void *xas_store(struct xa_state *xas, void *entry)
 		xas_init_marks(xas);
 
 	for (;;) {
-		/*
-		 * Must clear the marks before setting the entry to NULL,
-		 * otherwise xas_for_each_marked may find a NULL entry and
-		 * stop early.  rcu_assign_pointer contains a release barrier
-		 * so the mark clearing will appear to happen before the
-		 * entry is set to NULL.
-		 */
 		rcu_assign_pointer(*slot, entry);
 		if (xa_is_node(next) && (!node || node->shift))
 			xas_free_nodes(xas, xa_to_node(next));
-- 
2.16.4



* [PATCH 02/23] xarray: Provide xas_erase() and xas_store_noinit() helpers
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Currently xas_store() clears marks when the stored value is NULL. This is
somewhat counter-intuitive and also causes a measurable performance impact
when mark clearing is not needed (e.g. because the marks are already clear).
So provide an xas_erase() helper (similar to the existing xa_erase()) which
stores NULL at the given index and also takes care of clearing marks. We
also introduce an xas_store_noinit() helper that works like xas_store()
except that it does not initialize marks (and thus has better
performance).  In the following patches, all callers of xas_store() will
be converted to either xas_erase() or xas_store_noinit().
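As a purely illustrative sketch, the split can be modeled with a userspace
toy single slot (this is not the kernel xarray implementation; the `toy_*`
names and the mark bitmask are invented for the example):

```c
#include <assert.h>
#include <stddef.h>

/* One slot with a bitmask standing in for the xarray marks. */
struct toy_slot {
	void *entry;
	unsigned int marks;
};

/* Old xas_store() behavior: storing NULL implicitly wipes the marks. */
static void *toy_store(struct toy_slot *s, void *entry)
{
	void *old = s->entry;

	if (!entry)
		s->marks = 0;
	s->entry = entry;
	return old;
}

/* Like xas_store_noinit(): store the entry, never touch the marks. */
static void *toy_store_noinit(struct toy_slot *s, void *entry)
{
	void *old = s->entry;

	s->entry = entry;
	return old;
}

/* Like xas_erase(): store NULL and explicitly clear the marks. */
static void *toy_erase(struct toy_slot *s)
{
	void *old = s->entry;

	s->entry = NULL;
	s->marks = 0;
	return old;
}
```

Callers that know no marks are set (or that the entry will immediately be
overwritten) can use the noinit variant and skip the mark handling; callers
that really tear an entry down use the erase variant.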

Signed-off-by: Jan Kara <jack@suse.cz>
---
 include/linux/xarray.h |  2 ++
 lib/xarray.c           | 59 +++++++++++++++++++++++++++++++++++++-------------
 2 files changed, 46 insertions(+), 15 deletions(-)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 14c893433139..06acef49ec95 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -1491,6 +1491,8 @@ static inline bool xas_retry(struct xa_state *xas, const void *entry)
 
 void *xas_load(struct xa_state *);
 void *xas_store(struct xa_state *, void *entry);
+void *xas_store_noinit(struct xa_state *, void *entry);
+void *xas_erase(struct xa_state *);
 void *xas_find(struct xa_state *, unsigned long max);
 void *xas_find_conflict(struct xa_state *);
 
diff --git a/lib/xarray.c b/lib/xarray.c
index dae68dd13a02..ed98fc152b17 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -753,20 +753,7 @@ static void update_node(struct xa_state *xas, struct xa_node *node,
 		xas_delete_node(xas);
 }
 
-/**
- * xas_store() - Store this entry in the XArray.
- * @xas: XArray operation state.
- * @entry: New entry.
- *
- * If @xas is operating on a multi-index entry, the entry returned by this
- * function is essentially meaningless (it may be an internal entry or it
- * may be %NULL, even if there are non-NULL entries at some of the indices
- * covered by the range).  This is not a problem for any current users,
- * and can be changed if needed.
- *
- * Return: The old entry at this index.
- */
-void *xas_store(struct xa_state *xas, void *entry)
+static void *__xas_store(struct xa_state *xas, void *entry, bool init_marks)
 {
 	struct xa_node *node;
 	void __rcu **slot = &xas->xa->xa_head;
@@ -799,7 +786,7 @@ void *xas_store(struct xa_state *xas, void *entry)
 		if (xas->xa_sibs)
 			xas_squash_marks(xas);
 	}
-	if (!entry)
+	if (init_marks)
 		xas_init_marks(xas);
 
 	for (;;) {
@@ -831,8 +818,33 @@ void *xas_store(struct xa_state *xas, void *entry)
 	update_node(xas, node, count, values);
 	return first;
 }
+
+/**
+ * xas_store() - Store this entry in the XArray.
+ * @xas: XArray operation state.
+ * @entry: New entry.
+ *
+ * If @xas is operating on a multi-index entry, the entry returned by this
+ * function is essentially meaningless (it may be an internal entry or it
+ * may be %NULL, even if there are non-NULL entries at some of the indices
+ * covered by the range).  This is not a problem for any current users,
+ * and can be changed if needed.
+ *
+ * Return: The old entry at this index.
+ */
+void *xas_store(struct xa_state *xas, void *entry)
+{
+	return __xas_store(xas, entry, true);
+}
 EXPORT_SYMBOL_GPL(xas_store);
 
+/* This is like xas_store() but does not initialize marks on stored entry */
+void *xas_store_noinit(struct xa_state *xas, void *entry)
+{
+	return __xas_store(xas, entry, false);
+}
+EXPORT_SYMBOL_GPL(xas_store_noinit);
+
 /**
  * xas_get_mark() - Returns the state of this mark.
  * @xas: XArray operation state.
@@ -1314,6 +1326,23 @@ static void *xas_result(struct xa_state *xas, void *curr)
 	return curr;
 }
 
+/**
+ * xas_erase() - Erase this entry from the XArray
+ * @xas: XArray operation state.
+ *
+ * After this function returns, loading from @index will return %NULL. The
+ * function also clears all marks associated with the @index.  If the index is
+ * part of a multi-index entry, all indices will be erased and none of the
+ * entries will be part of a multi-index entry.
+ *
+ * Return: The entry which used to be at this index.
+ */
+void *xas_erase(struct xa_state *xas)
+{
+	return __xas_store(xas, NULL, true);
+}
+EXPORT_SYMBOL(xas_erase);
+
 /**
  * __xa_erase() - Erase this entry from the XArray while locked.
  * @xa: XArray.
-- 
2.16.4



* [PATCH 03/23] xarray: Use xas_store_noinit() in __xa_store, __xa_insert, __xa_alloc
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

These three functions never store NULL in the array, so xas_store() is
equivalent to xas_store_noinit(). Replace it.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/xarray.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index ed98fc152b17..2eb634e8bf15 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1079,7 +1079,7 @@ EXPORT_SYMBOL_GPL(__xas_next);
  *
  * If no entry is found and the array is smaller than @max, the iterator
  * is set to the smallest index not yet in the array.  This allows @xas
- * to be immediately passed to xas_store().
+ * to be immediately passed to xas_store_noinit().
  *
  * Return: The entry, if found, otherwise %NULL.
  */
@@ -1145,7 +1145,7 @@ EXPORT_SYMBOL_GPL(xas_find);
  * If no marked entry is found and the array is smaller than @max, @xas is
  * set to the bounds state and xas->xa_index is set to the smallest index
  * not yet in the array.  This allows @xas to be immediately passed to
- * xas_store().
+ * xas_store_noinit().
  *
  * If no entry is found before @max is reached, @xas is set to the restart
  * state.
@@ -1412,7 +1412,7 @@ void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 		entry = XA_ZERO_ENTRY;
 
 	do {
-		curr = xas_store(&xas, entry);
+		curr = xas_store_noinit(&xas, entry);
 		if (xa_track_free(xa))
 			xas_clear_mark(&xas, XA_FREE_MARK);
 	} while (__xas_nomem(&xas, gfp));
@@ -1517,7 +1517,7 @@ int __xa_insert(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 	do {
 		curr = xas_load(&xas);
 		if (!curr) {
-			xas_store(&xas, entry);
+			xas_store_noinit(&xas, entry);
 			if (xa_track_free(xa))
 				xas_clear_mark(&xas, XA_FREE_MARK);
 		} else {
@@ -1653,7 +1653,7 @@ int __xa_alloc(struct xarray *xa, u32 *id, void *entry,
 			xas_set_err(&xas, -EBUSY);
 		else
 			*id = xas.xa_index;
-		xas_store(&xas, entry);
+		xas_store_noinit(&xas, entry);
 		xas_clear_mark(&xas, XA_FREE_MARK);
 	} while (__xas_nomem(&xas, gfp));
 
-- 
2.16.4



* [PATCH 04/23] xarray: Switch xa_store_range() to use xas_store_noinit()
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

There is only a single user of xa_store_range(), mm/memremap.c, and it
does not use marks at all. Just switch xa_store_range() to use
xas_store_noinit() to avoid implicit (and for all users unnecessary)
initialization of xarray marks.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/xarray.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index 2eb634e8bf15..49fafcee1c8e 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1602,7 +1602,7 @@ void *xa_store_range(struct xarray *xa, unsigned long first,
 		}
 		do {
 			xas_set_range(&xas, first, last);
-			xas_store(&xas, entry);
+			xas_store_noinit(&xas, entry);
 			if (xas_error(&xas))
 				goto unlock;
 			first += xas_size(&xas);
-- 
2.16.4



* [PATCH 05/23] xarray: Use xas_erase() in __xa_erase()
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

In this case we want to erase the element from the array. Use xas_erase()
for it.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/xarray.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index 49fafcee1c8e..96be029412b2 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1358,7 +1358,7 @@ EXPORT_SYMBOL(xas_erase);
 void *__xa_erase(struct xarray *xa, unsigned long index)
 {
 	XA_STATE(xas, xa, index);
-	return xas_result(&xas, xas_store(&xas, NULL));
+	return xas_result(&xas, xas_erase(&xas));
 }
 EXPORT_SYMBOL(__xa_erase);
 
-- 
2.16.4



* [PATCH 06/23] xarray: Explicitly set XA_FREE_MARK in __xa_cmpxchg()
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

__xa_cmpxchg() relies on xas_store() to set XA_FREE_MARK when storing
NULL into an xarray that has free tracking enabled. Make the setting of
XA_FREE_MARK explicit, similarly to how its clearing already is.
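The resulting free-tracking logic can be sketched in a userspace toy model
(illustrative only; `TOY_FREE_MARK` and the `toy_*` names are invented and
this is not the kernel implementation):

```c
#include <assert.h>
#include <stddef.h>

#define TOY_FREE_MARK 0x4u	/* stands in for XA_FREE_MARK */

struct toy_slot {
	void *entry;
	unsigned int marks;
};

/*
 * Sketch of the cmpxchg path after the change: the store itself leaves
 * marks alone, and the free mark is maintained explicitly on both the
 * NULL -> entry and the entry -> NULL transitions.
 */
static void *toy_cmpxchg(struct toy_slot *s, void *old, void *entry)
{
	void *curr = s->entry;

	if (curr == old) {
		s->entry = entry;
		if (entry && !curr)
			s->marks &= ~TOY_FREE_MARK;	/* slot became used */
		else if (!entry && curr)
			s->marks |= TOY_FREE_MARK;	/* slot became free */
	}
	return curr;
}
```

With both transitions handled explicitly, the helper no longer depends on
the store primitive touching marks at all.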

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/xarray.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index 96be029412b2..a372c59e3914 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1479,8 +1479,12 @@ void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
 		curr = xas_load(&xas);
 		if (curr == old) {
 			xas_store(&xas, entry);
-			if (xa_track_free(xa) && entry && !curr)
-				xas_clear_mark(&xas, XA_FREE_MARK);
+			if (xa_track_free(xa)) {
+				if (entry && !curr)
+					xas_clear_mark(&xas, XA_FREE_MARK);
+				else if (!entry && curr)
+					xas_set_mark(&xas, XA_FREE_MARK);
+			}
 		}
 	} while (__xas_nomem(&xas, gfp));
 
-- 
2.16.4



* [PATCH 07/23] xarray: Switch __xa_cmpxchg() to use xas_store_noinit()
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Currently there are four places that end up calling __xa_cmpxchg():
two in drivers/infiniband/hw/mlx5/odp.c, one in fs/erofs/utils.c, and one
in mm/shmem.c. The xarrays in the first three places never contain any
marks, and the fourth place originally used radix_tree_delete_item(),
which didn't touch marks either. So it is safe to switch __xa_cmpxchg()
to use xas_store_noinit(). Also document that __xa_cmpxchg() now does
not touch marks at all - i.e., it has ordinary store semantics without
special-casing the NULL value. If someone ever needs an xa_cmpxchg()
equivalent that really does clear marks when storing NULL, we can create
an xa_erase_item() function.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/xarray.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/lib/xarray.c b/lib/xarray.c
index a372c59e3914..d87045d120ad 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1460,7 +1460,8 @@ EXPORT_SYMBOL(xa_store);
  *
  * You must already be holding the xa_lock when calling this function.
  * It will drop the lock if needed to allocate memory, and then reacquire
- * it afterwards.
+ * it afterwards. The call does not change any xarray marks except for
+ * XA_FREE_MARK if free tracking is enabled.
  *
  * Context: Any context.  Expects xa_lock to be held on entry.  May
  * release and reacquire xa_lock if @gfp flags permit.
@@ -1478,7 +1479,7 @@ void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
 	do {
 		curr = xas_load(&xas);
 		if (curr == old) {
-			xas_store(&xas, entry);
+			xas_store_noinit(&xas, entry);
 			if (xa_track_free(xa)) {
 				if (entry && !curr)
 					xas_clear_mark(&xas, XA_FREE_MARK);
-- 
2.16.4



* [PATCH 08/23] dax: Use xas_erase() in __dax_invalidate_entry()
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

When truncating a DAX entry, we need to clear all the outstanding marks
for the entry. Use xas_erase() instead of xas_store().

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 11b16729b86f..f8358928549c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -643,7 +643,7 @@ static int __dax_invalidate_entry(struct address_space *mapping,
 	     xas_get_mark(&xas, PAGECACHE_TAG_TOWRITE)))
 		goto out;
 	dax_disassociate_entry(entry, mapping, trunc);
-	xas_store(&xas, NULL);
+	xas_erase(&xas);
 	mapping->nrexceptional--;
 	ret = 1;
 out:
-- 
2.16.4



* [PATCH 09/23] dax: Use xas_store_noinit() in grab_mapping_entry()
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Although we store NULL here, there aren't any marks set on the entry,
so use xas_store_noinit() to save the time xas_erase() would spend
reinitializing marks. Furthermore, we overwrite the entry with a new PTE
entry shortly afterwards anyway.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index f8358928549c..03c6ca693f3c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -523,7 +523,11 @@ static void *grab_mapping_entry(struct xa_state *xas,
 		}
 
 		dax_disassociate_entry(entry, mapping, false);
-		xas_store(xas, NULL);	/* undo the PMD join */
+		/*
+		 * Remove PMD entry. There should be no marks set for the entry
+ 		 * so don't bother reinitializing them with xas_erase().
+		 */
+		xas_store_noinit(xas, NULL);
 		dax_wake_entry(xas, entry, true);
 		mapping->nrexceptional--;
 		entry = NULL;
-- 
2.16.4



* [PATCH 10/23] dax: Convert xas_store() to xas_store_noinit()
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

All remaining users of xas_store() store non-NULL entries, so xas_store()
and xas_store_noinit() are equivalent. Replace xas_store() with
xas_store_noinit().

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 03c6ca693f3c..1c905830ee10 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -283,7 +283,7 @@ static void dax_unlock_entry(struct xa_state *xas, void *entry)
 	BUG_ON(dax_is_locked(entry));
 	xas_reset(xas);
 	xas_lock_irq(xas);
-	old = xas_store(xas, entry);
+	old = xas_store_noinit(xas, entry);
 	xas_unlock_irq(xas);
 	BUG_ON(!dax_is_locked(old));
 	dax_wake_entry(xas, entry, false);
@@ -295,7 +295,7 @@ static void dax_unlock_entry(struct xa_state *xas, void *entry)
 static void *dax_lock_entry(struct xa_state *xas, void *entry)
 {
 	unsigned long v = xa_to_value(entry);
-	return xas_store(xas, xa_mk_value(v | DAX_LOCKED));
+	return xas_store_noinit(xas, xa_mk_value(v | DAX_LOCKED));
 }
 
 static unsigned long dax_entry_size(void *entry)
@@ -923,7 +923,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	 */
 	xas_reset(xas);
 	xas_lock_irq(xas);
-	xas_store(xas, entry);
+	xas_store_noinit(xas, entry);
 	xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
 	dax_wake_entry(xas, entry, false);
 
-- 
2.16.4



* [PATCH 11/23] mm: Use xas_erase() in page_cache_delete_batch()
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

We need to clear marks when removing a page from the xarray since there
could be DIRTY or TOWRITE tags left for the page. Use xas_erase() to
explicitly request mark clearing.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/filemap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 23a051a7ef0f..48c488b505ad 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -333,7 +333,7 @@ static void page_cache_delete_batch(struct address_space *mapping,
 		 */
 		if (page->index + compound_nr(page) - 1 == xas.xa_index)
 			i++;
-		xas_store(&xas, NULL);
+		xas_erase(&xas);
 		total_pages++;
 	}
 	mapping->nrpages -= total_pages;
-- 
2.16.4



* [PATCH 12/23] mm: Use xas_erase() in collapse_file()
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

When undoing failed collapse of ordinary pages into a huge page, use
xas_erase() to explicitly clear any xarray marks that may have been
added to entries.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/khugepaged.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 99d77ffb79c2..8da820c02de7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1819,7 +1819,7 @@ static void collapse_file(struct mm_struct *mm,
 					break;
 				nr_none--;
 				/* Put holes back where they were */
-				xas_store(&xas, NULL);
+				xas_erase(&xas);
 				continue;
 			}
 
-- 
2.16.4



* [PATCH 13/23] mm: Use xas_store_noinit() when storing non-NULL
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

When we store a value different from NULL, xas_store_noinit() is
equivalent to xas_store(). Transition these places to
xas_store_noinit().

Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/filemap.c    | 6 +++---
 mm/khugepaged.c | 6 +++---
 mm/migrate.c    | 6 +++---
 mm/shmem.c      | 4 ++--
 mm/swap_state.c | 2 +-
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 48c488b505ad..4fb515d8c242 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -134,7 +134,7 @@ static void page_cache_delete(struct address_space *mapping,
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(nr != 1 && shadow, page);
 
-	xas_store(&xas, shadow);
+	xas_store_noinit(&xas, shadow);
 	xas_init_marks(&xas);
 
 	page->mapping = NULL;
@@ -803,7 +803,7 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 	new->index = offset;
 
 	xas_lock_irqsave(&xas, flags);
-	xas_store(&xas, new);
+	xas_store_noinit(&xas, new);
 
 	old->mapping = NULL;
 	/* hugetlb pages do not participate in page cache accounting. */
@@ -856,7 +856,7 @@ static int __add_to_page_cache_locked(struct page *page,
 		old = xas_load(&xas);
 		if (old && !xa_is_value(old))
 			xas_set_err(&xas, -EEXIST);
-		xas_store(&xas, page);
+		xas_store_noinit(&xas, page);
 		if (xas_error(&xas))
 			goto unlock;
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8da820c02de7..79e5f0d12517 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1588,7 +1588,7 @@ static void collapse_file(struct mm_struct *mm,
 					result = SCAN_FAIL;
 					goto xa_locked;
 				}
-				xas_store(&xas, new_page);
+				xas_store_noinit(&xas, new_page);
 				nr_none++;
 				continue;
 			}
@@ -1724,7 +1724,7 @@ static void collapse_file(struct mm_struct *mm,
 		list_add_tail(&page->lru, &pagelist);
 
 		/* Finally, replace with the new page. */
-		xas_store(&xas, new_page);
+		xas_store_noinit(&xas, new_page);
 		continue;
 out_unlock:
 		unlock_page(page);
@@ -1828,7 +1828,7 @@ static void collapse_file(struct mm_struct *mm,
 			/* Unfreeze the page. */
 			list_del(&page->lru);
 			page_ref_unfreeze(page, 2);
-			xas_store(&xas, page);
+			xas_store_noinit(&xas, page);
 			xas_pause(&xas);
 			xas_unlock_irq(&xas);
 			unlock_page(page);
diff --git a/mm/migrate.c b/mm/migrate.c
index 7160c1556f79..1610b7336af8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -459,13 +459,13 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		SetPageDirty(newpage);
 	}
 
-	xas_store(&xas, newpage);
+	xas_store_noinit(&xas, newpage);
 	if (PageTransHuge(page)) {
 		int i;
 
 		for (i = 1; i < HPAGE_PMD_NR; i++) {
 			xas_next(&xas);
-			xas_store(&xas, newpage);
+			xas_store_noinit(&xas, newpage);
 		}
 	}
 
@@ -536,7 +536,7 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 
 	get_page(newpage);
 
-	xas_store(&xas, newpage);
+	xas_store_noinit(&xas, newpage);
 
 	page_ref_unfreeze(page, expected_count - 1);
 
diff --git a/mm/shmem.c b/mm/shmem.c
index bd8840082c94..35a4abd9b417 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -361,7 +361,7 @@ static int shmem_replace_entry(struct address_space *mapping,
 	item = xas_load(&xas);
 	if (item != expected)
 		return -ENOENT;
-	xas_store(&xas, replacement);
+	xas_store_noinit(&xas, replacement);
 	return 0;
 }
 
@@ -631,7 +631,7 @@ static int shmem_add_to_page_cache(struct page *page,
 		if (xas_error(&xas))
 			goto unlock;
 next:
-		xas_store(&xas, page);
+		xas_store_noinit(&xas, page);
 		if (++i < nr) {
 			xas_next(&xas);
 			goto next;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index ebed37bbf7a3..1afbf68f1724 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -133,7 +133,7 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 		for (i = 0; i < nr; i++) {
 			VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
 			set_page_private(page + i, entry.val + i);
-			xas_store(&xas, page);
+			xas_store_noinit(&xas, page);
 			xas_next(&xas);
 		}
 		address_space->nrpages += nr;
-- 
2.16.4



* [PATCH 14/23] workingset: Use xas_store_noinit() to clear shadow entry
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Shadow entries have no marks. Use xas_store_noinit() to clear the entry
and thus avoid unneeded initialization of the xarray marks. This provides
a nice boost to truncate numbers. A sample benchmark shows the time to
truncate 128 files of 1GB each on a machine with 64GB of RAM (so about
half of the entries are shadow entries):

         AVG      STDDEV
Vanilla  4.825s   0.036
Patched  4.516s   0.014

So we can see about a 6% reduction in overall truncate time.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/truncate.c   | 2 +-
 mm/workingset.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index dd9ebc1da356..baef636564cc 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -39,7 +39,7 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
 	xas_set_update(&xas, workingset_update_node);
 	if (xas_load(&xas) != entry)
 		return;
-	xas_store(&xas, NULL);
+	xas_store_noinit(&xas, NULL);
 	mapping->nrexceptional--;
 }
 
diff --git a/mm/workingset.c b/mm/workingset.c
index 474186b76ced..6a492140057f 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -537,7 +537,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
 	 * We could store a shadow entry here which was the minimum of the
 	 * shadow entries we were tracking ...
 	 */
-	xas_store(&xas, NULL);
+	xas_store_noinit(&xas, NULL);
 	__inc_lruvec_slab_state(node, WORKINGSET_NODERECLAIM);
 
 out_invalid:
-- 
2.16.4


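The mark handling this patch exploits can be sketched as a tiny userspace model. To be clear, the struct and function names below are invented for illustration and are not kernel code; a real XArray slot does not look like this:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Tiny userspace model of the behaviour the patch relies on.  All
 * names are invented for this sketch.
 */
struct slot {
	void *entry;		/* stored pointer, or NULL when empty */
	unsigned int marks;	/* bitmap standing in for the xarray marks */
};

/* Analogue of xas_store_noinit(): replace the entry, leave marks alone. */
static void *store_noinit(struct slot *s, void *entry)
{
	void *old = s->entry;

	s->entry = entry;
	return old;
}

/* Analogue of xas_erase(): clear the entry and reset its marks too. */
static void *erase(struct slot *s)
{
	void *old = s->entry;

	s->entry = NULL;
	s->marks = 0;	/* this is exactly the work store_noinit() skips */
	return old;
}
```

Since shadow entries never have marks set, clearing them with the noinit variant is safe, and skipping the mark bookkeeping is where the truncate speedup measured above comes from.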

* [PATCH 15/23] swap: Use xas_erase() when removing page from swap cache
  2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
                   ` (13 preceding siblings ...)
  2020-04-22 15:02 ` [PATCH 14/23] workingset: Use xas_store_noinit() to clear shadow entry Jan Kara
@ 2020-04-22 15:02 ` Jan Kara
  2020-04-22 15:02 ` [PATCH 16/23] idr: Use xas_erase() in ida_destroy() Jan Kara
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Use xas_erase() to explicitly clear xarray marks when removing swap
cache pages from the i_mapping xarray.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/swap_state.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 1afbf68f1724..b5c8cbdcf8f0 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -167,7 +167,7 @@ void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
 	VM_BUG_ON_PAGE(PageWriteback(page), page);
 
 	for (i = 0; i < nr; i++) {
-		void *entry = xas_store(&xas, NULL);
+		void *entry = xas_erase(&xas);
 		VM_BUG_ON_PAGE(entry != page, entry);
 		set_page_private(page + i, 0);
 		xas_next(&xas);
-- 
2.16.4



* [PATCH 16/23] idr: Use xas_erase() in ida_destroy()
  2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
                   ` (14 preceding siblings ...)
  2020-04-22 15:02 ` [PATCH 15/23] swap: Use xas_erase() when removing page from swap cache Jan Kara
@ 2020-04-22 15:02 ` Jan Kara
  2020-04-22 15:02 ` [PATCH 17/23] idr: Use xas_erase() in ida_free() Jan Kara
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Explicitly clear marks (and set XA_FREE_MARK) in ida_destroy() by
calling xas_erase() instead of relying on xas_store() to do this
implicitly.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/idr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/idr.c b/lib/idr.c
index c2cf2c52bbde..fd4877fef06d 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -543,7 +543,7 @@ void ida_destroy(struct ida *ida)
 	xas_for_each(&xas, bitmap, ULONG_MAX) {
 		if (!xa_is_value(bitmap))
 			kfree(bitmap);
-		xas_store(&xas, NULL);
+		xas_erase(&xas);
 	}
 	xas_unlock_irqrestore(&xas, flags);
 }
-- 
2.16.4


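A sketch of why the IDA call sites want xas_erase() rather than a bare NULL store: an IDA tracks free slots via the free mark, so removing an entry must also flag the slot as allocatable again. All names below are invented for this userspace model:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MODEL_FREE_MARK	(1u << 0)	/* stand-in for XA_FREE_MARK */

struct model_slot {
	void *entry;
	unsigned int marks;
	bool track_free;	/* stand-in for XA_FLAGS_TRACK_FREE */
};

/* Analogue of xas_erase() on a free-tracking array. */
static void *model_erase(struct model_slot *s)
{
	void *old = s->entry;

	s->entry = NULL;
	/* Reset marks; on a free-tracking array the free mark comes back. */
	s->marks = s->track_free ? MODEL_FREE_MARK : 0;
	return old;
}
```

This matches the documented default state: all marks clear, unless the array tracks free entries, in which case the free mark is set.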

* [PATCH 17/23] idr: Use xas_erase() in ida_free()
  2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
                   ` (15 preceding siblings ...)
  2020-04-22 15:02 ` [PATCH 16/23] idr: Use xas_erase() in ida_destroy() Jan Kara
@ 2020-04-22 15:02 ` Jan Kara
  2020-04-22 15:02 ` [PATCH 18/23] idr: Convert xas_store() to xas_store_noinit() Jan Kara
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Explicitly clear marks in ida_free() by calling xas_erase() instead of
relying on xas_store() to do this implicitly.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/idr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/idr.c b/lib/idr.c
index fd4877fef06d..4ee06bc7208a 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -511,7 +511,7 @@ void ida_free(struct ida *ida, unsigned int id)
 		if (bitmap_empty(bitmap->bitmap, IDA_BITMAP_BITS)) {
 			kfree(bitmap);
 delete:
-			xas_store(&xas, NULL);
+			xas_erase(&xas);
 		}
 	}
 	xas_unlock_irqrestore(&xas, flags);
-- 
2.16.4



* [PATCH 18/23] idr: Convert xas_store() to xas_store_noinit()
  2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
                   ` (16 preceding siblings ...)
  2020-04-22 15:02 ` [PATCH 17/23] idr: Use xas_erase() in ida_free() Jan Kara
@ 2020-04-22 15:02 ` Jan Kara
  2020-04-22 15:02 ` [PATCH 19/23] testing: Use xas_erase() to remove entries from xarray Jan Kara
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

All remaining users of xas_store() store non-NULL entries, so
xas_store() and xas_store_noinit() are equivalent. Replace xas_store()
with xas_store_noinit().

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/idr.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/idr.c b/lib/idr.c
index 4ee06bc7208a..afd171077901 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -408,7 +408,7 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
 				goto nospc;
 			if (bit < BITS_PER_XA_VALUE) {
 				tmp |= 1UL << bit;
-				xas_store(&xas, xa_mk_value(tmp));
+				xas_store_noinit(&xas, xa_mk_value(tmp));
 				goto out;
 			}
 		}
@@ -418,7 +418,7 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
 		if (!bitmap)
 			goto alloc;
 		bitmap->bitmap[0] = tmp;
-		xas_store(&xas, bitmap);
+		xas_store_noinit(&xas, bitmap);
 		if (xas_error(&xas)) {
 			bitmap->bitmap[0] = 0;
 			goto out;
@@ -446,7 +446,7 @@ int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
 				goto alloc;
 			__set_bit(bit, bitmap->bitmap);
 		}
-		xas_store(&xas, bitmap);
+		xas_store_noinit(&xas, bitmap);
 	}
 out:
 	xas_unlock_irqrestore(&xas, flags);
@@ -502,7 +502,7 @@ void ida_free(struct ida *ida, unsigned int id)
 		v &= ~(1UL << bit);
 		if (!v)
 			goto delete;
-		xas_store(&xas, xa_mk_value(v));
+		xas_store_noinit(&xas, xa_mk_value(v));
 	} else {
 		if (!test_bit(bit, bitmap->bitmap))
 			goto err;
-- 
2.16.4



* [PATCH 19/23] testing: Use xas_erase() to remove entries from xarray
  2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
                   ` (17 preceding siblings ...)
  2020-04-22 15:02 ` [PATCH 18/23] idr: Convert xas_store() to xas_store_noinit() Jan Kara
@ 2020-04-22 15:02 ` Jan Kara
  2020-04-22 15:02 ` [PATCH 20/23] testing: Use xas_store_noinit() for non-NULL entries Jan Kara
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Use xas_erase() instead of xas_store(..., NULL) for removing entries
from the xarray.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/test_xarray.c               | 10 +++++-----
 tools/testing/radix-tree/test.c |  2 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index d4f97925dbd8..434031fe8d54 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -318,7 +318,7 @@ static noinline void check_xa_shrink(struct xarray *xa)
 	XA_BUG_ON(xa, xas_load(&xas) != xa_mk_value(1));
 	node = xas.xa_node;
 	XA_BUG_ON(xa, xa_entry_locked(xa, node, 0) != xa_mk_value(0));
-	XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1));
+	XA_BUG_ON(xa, xas_erase(&xas) != xa_mk_value(1));
 	XA_BUG_ON(xa, xa_load(xa, 1) != NULL);
 	XA_BUG_ON(xa, xas.xa_node != XAS_BOUNDS);
 	XA_BUG_ON(xa, xa_entry_locked(xa, node, 0) != XA_RETRY_ENTRY);
@@ -488,13 +488,13 @@ static noinline void check_xas_erase(struct xarray *xa)
 		} while (xas_nomem(&xas, GFP_KERNEL));
 
 		xas_lock(&xas);
-		xas_store(&xas, NULL);
+		xas_erase(&xas);
 
 		xas_set(&xas, 0);
 		j = i;
 		xas_for_each(&xas, entry, ULONG_MAX) {
 			XA_BUG_ON(xa, entry != xa_mk_index(j));
-			xas_store(&xas, NULL);
+			xas_erase(&xas);
 			j++;
 		}
 		xas_unlock(&xas);
@@ -537,7 +537,7 @@ static noinline void check_multi_store_2(struct xarray *xa, unsigned long index,
 	xas_lock(&xas);
 	XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0));
 	XA_BUG_ON(xa, xas.xa_index != index);
-	XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1));
+	XA_BUG_ON(xa, xas_erase(&xas) != xa_mk_value(1));
 	xas_unlock(&xas);
 	XA_BUG_ON(xa, !xa_empty(xa));
 }
@@ -1582,7 +1582,7 @@ static noinline void shadow_remove(struct xarray *xa)
 		xas.xa_offset = node->offset;
 		xas.xa_shift = node->shift + XA_CHUNK_SHIFT;
 		xas_set_update(&xas, test_update_node);
-		xas_store(&xas, NULL);
+		xas_erase(&xas);
 	}
 	xa_unlock(xa);
 }
diff --git a/tools/testing/radix-tree/test.c b/tools/testing/radix-tree/test.c
index a15d0512e633..07dc2b4dc587 100644
--- a/tools/testing/radix-tree/test.c
+++ b/tools/testing/radix-tree/test.c
@@ -261,7 +261,7 @@ void item_kill_tree(struct xarray *xa)
 		if (!xa_is_value(entry)) {
 			item_free(entry, xas.xa_index);
 		}
-		xas_store(&xas, NULL);
+		xas_erase(&xas);
 	}
 
 	assert(xa_empty(xa));
-- 
2.16.4



* [PATCH 20/23] testing: Use xas_store_noinit() for non-NULL entries
  2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
                   ` (18 preceding siblings ...)
  2020-04-22 15:02 ` [PATCH 19/23] testing: Use xas_erase() to remove entries from xarray Jan Kara
@ 2020-04-22 15:02 ` Jan Kara
  2020-04-22 15:02 ` [PATCH 21/23] testing: Introduce xa_erase_order() and use it Jan Kara
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

When we store a value different from NULL, xas_store_noinit() is
equivalent to xas_store(). Convert these call sites to
xas_store_noinit().

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/test_xarray.c                          | 28 ++++++++++++++--------------
 tools/testing/radix-tree/iteration_check.c |  2 +-
 tools/testing/radix-tree/multiorder.c      |  2 +-
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index 434031fe8d54..2cf3ef5d5014 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -130,13 +130,13 @@ static noinline void check_xas_retry(struct xarray *xa)
 	/* Make sure we can iterate through retry entries */
 	xas_lock(&xas);
 	xas_set(&xas, 0);
-	xas_store(&xas, XA_RETRY_ENTRY);
+	xas_store_noinit(&xas, XA_RETRY_ENTRY);
 	xas_set(&xas, 1);
-	xas_store(&xas, XA_RETRY_ENTRY);
+	xas_store_noinit(&xas, XA_RETRY_ENTRY);
 
 	xas_set(&xas, 0);
 	xas_for_each(&xas, entry, ULONG_MAX) {
-		xas_store(&xas, xa_mk_index(xas.xa_index));
+		xas_store_noinit(&xas, xa_mk_index(xas.xa_index));
 	}
 	xas_unlock(&xas);
 
@@ -475,7 +475,7 @@ static noinline void check_xas_erase(struct xarray *xa)
 			xas_set(&xas, j);
 			do {
 				xas_lock(&xas);
-				xas_store(&xas, xa_mk_index(j));
+				xas_store_noinit(&xas, xa_mk_index(j));
 				xas_unlock(&xas);
 			} while (xas_nomem(&xas, GFP_KERNEL));
 		}
@@ -483,7 +483,7 @@ static noinline void check_xas_erase(struct xarray *xa)
 		xas_set(&xas, ULONG_MAX);
 		do {
 			xas_lock(&xas);
-			xas_store(&xas, xa_mk_value(0));
+			xas_store_noinit(&xas, xa_mk_value(0));
 			xas_unlock(&xas);
 		} while (xas_nomem(&xas, GFP_KERNEL));
 
@@ -517,7 +517,7 @@ static noinline void check_multi_store_1(struct xarray *xa, unsigned long index,
 	XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL);
 
 	xas_lock(&xas);
-	XA_BUG_ON(xa, xas_store(&xas, xa_mk_index(min)) != xa_mk_index(index));
+	XA_BUG_ON(xa, xas_store_noinit(&xas, xa_mk_index(min)) != xa_mk_index(index));
 	xas_unlock(&xas);
 	XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_index(min));
 	XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_index(min));
@@ -535,7 +535,7 @@ static noinline void check_multi_store_2(struct xarray *xa, unsigned long index,
 	xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL);
 
 	xas_lock(&xas);
-	XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0));
+	XA_BUG_ON(xa, xas_store_noinit(&xas, xa_mk_value(1)) != xa_mk_value(0));
 	XA_BUG_ON(xa, xas.xa_index != index);
 	XA_BUG_ON(xa, xas_erase(&xas) != xa_mk_value(1));
 	xas_unlock(&xas);
@@ -854,7 +854,7 @@ static noinline void __check_store_iter(struct xarray *xa, unsigned long start,
 		XA_BUG_ON(xa, entry > xa_mk_index(start + (1UL << order) - 1));
 		count++;
 	}
-	xas_store(&xas, xa_mk_index(start));
+	xas_store_noinit(&xas, xa_mk_index(start));
 	xas_unlock(&xas);
 	if (xas_nomem(&xas, GFP_KERNEL)) {
 		count = 0;
@@ -1365,7 +1365,7 @@ static noinline void xa_store_many_order(struct xarray *xa,
 		if (xas_error(&xas))
 			goto unlock;
 		for (i = 0; i < (1U << order); i++) {
-			XA_BUG_ON(xa, xas_store(&xas, xa_mk_index(index + i)));
+			XA_BUG_ON(xa, xas_store_noinit(&xas, xa_mk_index(index + i)));
 			xas_next(&xas);
 		}
 unlock:
@@ -1420,7 +1420,7 @@ static noinline void check_create_range_4(struct xarray *xa,
 		if (xas_error(&xas))
 			goto unlock;
 		for (i = 0; i < (1UL << order); i++) {
-			void *old = xas_store(&xas, xa_mk_index(base + i));
+			void *old = xas_store_noinit(&xas, xa_mk_index(base + i));
 			if (xas.xa_index == index)
 				XA_BUG_ON(xa, old != xa_mk_index(base + i));
 			else
@@ -1594,9 +1594,9 @@ static noinline void check_workingset(struct xarray *xa, unsigned long index)
 
 	do {
 		xas_lock(&xas);
-		xas_store(&xas, xa_mk_value(0));
+		xas_store_noinit(&xas, xa_mk_value(0));
 		xas_next(&xas);
-		xas_store(&xas, xa_mk_value(1));
+		xas_store_noinit(&xas, xa_mk_value(1));
 		xas_unlock(&xas);
 	} while (xas_nomem(&xas, GFP_KERNEL));
 
@@ -1604,10 +1604,10 @@ static noinline void check_workingset(struct xarray *xa, unsigned long index)
 
 	xas_lock(&xas);
 	xas_next(&xas);
-	xas_store(&xas, &xas);
+	xas_store_noinit(&xas, &xas);
 	XA_BUG_ON(xa, !list_empty(&shadow_nodes));
 
-	xas_store(&xas, xa_mk_value(2));
+	xas_store_noinit(&xas, xa_mk_value(2));
 	xas_unlock(&xas);
 	XA_BUG_ON(xa, list_empty(&shadow_nodes));
 
diff --git a/tools/testing/radix-tree/iteration_check.c b/tools/testing/radix-tree/iteration_check.c
index e9908bcb06dd..ce110e96757c 100644
--- a/tools/testing/radix-tree/iteration_check.c
+++ b/tools/testing/radix-tree/iteration_check.c
@@ -31,7 +31,7 @@ void my_item_insert(struct xarray *xa, unsigned long index)
 		item->order = order;
 		if (xas_find_conflict(&xas))
 			continue;
-		xas_store(&xas, item);
+		xas_store_noinit(&xas, item);
 		xas_set_mark(&xas, TAG);
 		break;
 	}
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index 9eae0fb5a67d..695a562d5ff1 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -20,7 +20,7 @@ static int item_insert_order(struct xarray *xa, unsigned long index,
 
 	do {
 		xas_lock(&xas);
-		xas_store(&xas, item);
+		xas_store_noinit(&xas, item);
 		xas_unlock(&xas);
 	} while (xas_nomem(&xas, GFP_KERNEL));
 
-- 
2.16.4



* [PATCH 21/23] testing: Introduce xa_erase_order() and use it
  2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
                   ` (19 preceding siblings ...)
  2020-04-22 15:02 ` [PATCH 20/23] testing: Use xas_store_noinit() for non-NULL entries Jan Kara
@ 2020-04-22 15:02 ` Jan Kara
  2020-04-22 15:02 ` [PATCH 22/23] testing: Switch xa_store_order() to xas_store_noinit() Jan Kara
  2020-04-22 15:02 ` [PATCH 23/23] xarray: Remove xas_store() Jan Kara
  22 siblings, 0 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

Introduce the helper xa_erase_order() and call it from places that want
to erase entries from the xarray instead of using xa_store_order(). This
also explicitly takes care of clearing possibly existing xarray marks
(although no current test seems to need it).

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/test_xarray.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index 2cf3ef5d5014..fc16eac1cbb9 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -83,6 +83,16 @@ static void *xa_store_order(struct xarray *xa, unsigned long index,
 	return curr;
 }
 
+static void xa_erase_order(struct xarray *xa, unsigned long index,
+		unsigned order)
+{
+	XA_STATE_ORDER(xas, xa, index, order);
+
+	xas_lock(&xas);
+	xas_erase(&xas);
+	xas_unlock(&xas);
+}
+
 static noinline void check_xa_err(struct xarray *xa)
 {
 	XA_BUG_ON(xa, xa_err(xa_store_index(xa, 0, GFP_NOWAIT)) != 0);
@@ -608,13 +618,13 @@ static noinline void check_multi_store(struct xarray *xa)
 	rcu_read_unlock();
 
 	/* We can erase multiple values with a single store */
-	xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL);
+	xa_erase_order(xa, 0, BITS_PER_LONG - 1);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
 	/* Even when the first slot is empty but the others aren't */
 	xa_store_index(xa, 1, GFP_KERNEL);
 	xa_store_index(xa, 2, GFP_KERNEL);
-	xa_store_order(xa, 0, 2, NULL, GFP_KERNEL);
+	xa_erase_order(xa, 0, 2);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
 	for (i = 0; i < max_order; i++) {
-- 
2.16.4



* [PATCH 22/23] testing: Switch xa_store_order() to xas_store_noinit()
  2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
                   ` (20 preceding siblings ...)
  2020-04-22 15:02 ` [PATCH 21/23] testing: Introduce xa_erase_order() and use it Jan Kara
@ 2020-04-22 15:02 ` Jan Kara
  2020-04-22 15:02 ` [PATCH 23/23] xarray: Remove xas_store() Jan Kara
  22 siblings, 0 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

xa_store_order() is currently only used for non-NULL entries, so
xas_store_noinit() is equivalent to xas_store() and we can safely use it
in xa_store_order().

Signed-off-by: Jan Kara <jack@suse.cz>
---
 lib/test_xarray.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index fc16eac1cbb9..3adc29819b0d 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -76,7 +76,7 @@ static void *xa_store_order(struct xarray *xa, unsigned long index,
 
 	do {
 		xas_lock(&xas);
-		curr = xas_store(&xas, entry);
+		curr = xas_store_noinit(&xas, entry);
 		xas_unlock(&xas);
 	} while (xas_nomem(&xas, gfp));
 
-- 
2.16.4



* [PATCH 23/23] xarray: Remove xas_store()
  2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
                   ` (21 preceding siblings ...)
  2020-04-22 15:02 ` [PATCH 22/23] testing: Switch xa_store_order() to xas_store_noinit() Jan Kara
@ 2020-04-22 15:02 ` Jan Kara
  22 siblings, 0 replies; 24+ messages in thread
From: Jan Kara @ 2020-04-22 15:02 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, LKML, Jan Kara

xas_store() now has no users as every call site has been transitioned
either to xas_erase() or to xas_store_noinit(). Remove xas_store() and
all remaining traces of it.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 Documentation/core-api/xarray.rst |  4 ++--
 include/linux/xarray.h            |  1 -
 lib/xarray.c                      | 14 ++++----------
 3 files changed, 6 insertions(+), 13 deletions(-)

diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
index 640934b6f7b4..1c201628a15a 100644
--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst
@@ -394,7 +394,7 @@ You can use xas_init_marks() to reset the marks on an entry
 to their default state.  This is usually all marks clear, unless the
 XArray is marked with ``XA_FLAGS_TRACK_FREE``, in which case mark 0 is set
 and all other marks are clear.  Replacing one entry with another using
-xas_store() will not reset the marks on that entry; if you want
+xas_store_noinit() will not reset the marks on that entry; if you want
 the marks reset, you should do that explicitly.
 
 The xas_load() will walk the xa_state as close to the entry
@@ -458,7 +458,7 @@ save substantial quantities of memory; for example tying 512 entries
 together will save over 4kB.
 
 You can create a multi-index entry by using XA_STATE_ORDER()
-or xas_set_order() followed by a call to xas_store().
+or xas_set_order() followed by a call to xas_store_noinit().
 Calling xas_load() with a multi-index xa_state will walk the
 xa_state to the right location in the tree, but the return value is not
 meaningful, potentially being an internal entry or ``NULL`` even when there
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 06acef49ec95..680c0dcaeb12 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -1490,7 +1490,6 @@ static inline bool xas_retry(struct xa_state *xas, const void *entry)
 }
 
 void *xas_load(struct xa_state *);
-void *xas_store(struct xa_state *, void *entry);
 void *xas_store_noinit(struct xa_state *, void *entry);
 void *xas_erase(struct xa_state *);
 void *xas_find(struct xa_state *, unsigned long max);
diff --git a/lib/xarray.c b/lib/xarray.c
index d87045d120ad..2912d706dc01 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -625,7 +625,7 @@ static int xas_expand(struct xa_state *xas, void *head)
  * @allow_root: %true if we can store the entry in the root directly
  *
  * Most users will not need to call this function directly, as it is called
- * by xas_store().  It is useful for doing conditional store operations
+ * by xas_store_noinit().  It is useful for doing conditional store operations
  * (see the xa_cmpxchg() implementation for an example).
  *
  * Return: If the slot already existed, returns the contents of this slot.
@@ -820,7 +820,7 @@ static void *__xas_store(struct xa_state *xas, void *entry, bool init_marks)
 }
 
 /**
- * xas_store() - Store this entry in the XArray.
+ * xas_store_noinit() - Store this entry in the XArray.
  * @xas: XArray operation state.
  * @entry: New entry.
  *
@@ -828,17 +828,11 @@ static void *__xas_store(struct xa_state *xas, void *entry, bool init_marks)
  * function is essentially meaningless (it may be an internal entry or it
  * may be %NULL, even if there are non-NULL entries at some of the indices
  * covered by the range).  This is not a problem for any current users,
- * and can be changed if needed.
+ * and can be changed if needed. Note that this function does not affect
+ * xarray marks.
  *
  * Return: The old entry at this index.
  */
-void *xas_store(struct xa_state *xas, void *entry)
-{
-	return __xas_store(xas, entry, true);
-}
-EXPORT_SYMBOL_GPL(xas_store);
-
-/* This is like xas_store() but does not initialize marks on stored entry */
 void *xas_store_noinit(struct xa_state *xas, void *entry)
 {
 	return __xas_store(xas, entry, false);
-- 
2.16.4


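Putting the series together, the surviving API funnels through __xas_store() with an init_marks flag, as the lib/xarray.c hunk above shows. The sketch below compiles standalone only because it uses a toy stand-in for struct xa_state; the real kernel type and function bodies differ, and the detail that xas_erase() amounts to __xas_store(xas, NULL, true) is inferred from the series rather than quoted from it:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in so the sketch compiles; the real type is kernel-internal. */
struct xa_state {
	void *slot;
	unsigned int marks;
};

/* Condensed model of the __xas_store() left behind by this series. */
static void *__xas_store(struct xa_state *xas, void *entry, bool init_marks)
{
	void *old = xas->slot;

	xas->slot = entry;
	if (init_marks)
		xas->marks = 0;	/* the real code also handles the free mark */
	return old;
}

/* Replace an entry without touching its marks. */
void *xas_store_noinit(struct xa_state *xas, void *entry)
{
	return __xas_store(xas, entry, false);
}

/* Remove an entry and reinitialize its marks. */
void *xas_erase(struct xa_state *xas)
{
	return __xas_store(xas, NULL, true);
}
```

Callers that used to rely on xas_store() clearing marks now pick the variant that states their intent: xas_erase() for removal, xas_store_noinit() everywhere else.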

end of thread, other threads:[~2020-04-22 15:06 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-22 15:02 [PATCH 0/23 v2] mm: Speedup page cache truncation Jan Kara
2020-04-22 15:02 ` [PATCH 01/23] xarray: Remove stale comment Jan Kara
2020-04-22 15:02 ` [PATCH 02/23] xarray: Provide xas_erase() and xas_store_noinit() helpers Jan Kara
2020-04-22 15:02 ` [PATCH 03/23] xarray: Use xas_store_noinit() in __xa_store, __xa_insert, __xa_alloc Jan Kara
2020-04-22 15:02 ` [PATCH 04/23] xarray: Switch xa_store_range() to use xas_store_noinit() Jan Kara
2020-04-22 15:02 ` [PATCH 05/23] xarray: Use xas_erase() in __xa_erase() Jan Kara
2020-04-22 15:02 ` [PATCH 06/23] xarray: Explicitely set XA_FREE_MARK in __xa_cmpxchg() Jan Kara
2020-04-22 15:02 ` [PATCH 07/23] xarray: Switch __xa_cmpxchg() to use xas_store_noinit() Jan Kara
2020-04-22 15:02 ` [PATCH 08/23] dax: Use xas_erase() in __dax_invalidate_entry() Jan Kara
2020-04-22 15:02 ` [PATCH 09/23] dax: Use dax_store_noinit() in grab_mapping_entry() Jan Kara
2020-04-22 15:02 ` [PATCH 10/23] dax: Convert xas_store() to xas_store_noinit() Jan Kara
2020-04-22 15:02 ` [PATCH 11/23] mm: Use xas_erase() in page_cache_delete_batch() Jan Kara
2020-04-22 15:02 ` [PATCH 12/23] mm: Use xas_erase() in collapse_file() Jan Kara
2020-04-22 15:02 ` [PATCH 13/23] mm: Use xas_store_noinit() when storing non-NULL Jan Kara
2020-04-22 15:02 ` [PATCH 14/23] workingset: Use xas_store_noinit() to clear shadow entry Jan Kara
2020-04-22 15:02 ` [PATCH 15/23] swap: Use xas_erase() when removing page from swap cache Jan Kara
2020-04-22 15:02 ` [PATCH 16/23] idr: Use xas_erase() in ida_destroy() Jan Kara
2020-04-22 15:02 ` [PATCH 17/23] idr: Use xas_erase() in ida_free() Jan Kara
2020-04-22 15:02 ` [PATCH 18/23] idr: Convert xas_store() to xas_store_noinit() Jan Kara
2020-04-22 15:02 ` [PATCH 19/23] testing: Use xas_erase() to remove entries from xarray Jan Kara
2020-04-22 15:02 ` [PATCH 20/23] testing: Use xas_store_noinit() for non-NULL entries Jan Kara
2020-04-22 15:02 ` [PATCH 21/23] testing: Introduce xa_erase_order() and use it Jan Kara
2020-04-22 15:02 ` [PATCH 22/23] testing: Switch xa_store_order() to xas_store_noinit() Jan Kara
2020-04-22 15:02 ` [PATCH 23/23] xarray: Remove xas_store() Jan Kara
