* [PATCH v5 0/4] xfs: fix some log stalling problems in defer ops
@ 2020-09-29 17:44 Darrick J. Wong
  2020-09-29 17:44 ` [PATCH 1/4] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Darrick J. Wong @ 2020-09-29 17:44 UTC (permalink / raw)
  To: darrick.wong; +Cc: Dave Chinner, Brian Foster, linux-xfs, david, bfoster

Hi all,

This last series tries to fix some structural problems in the defer ops
code.  The defer ops code has been finishing items in the wrong order --
if a top level defer op creates items A and B, and finishing item A
creates more defer ops A1 and A2, we'll put the new items on the end of
the chain and process them in the order A B A1 A2.  This is kind of
weird, since it's convenient for programmers to be able to think of A
and B as an ordered sequence where all the work for A must finish before
we move on to B, e.g. A A1 A2 B.

That isn't how the defer ops code actually works, but so far we've been
lucky that this hasn't ever caused serious problems.  It /will/ matter,
however, when we get to the atomic extent swap code, where for
refcounting purposes it actually /does/ matter that the unmap and map
child intents execute in that order, and complete before we move on to
the next extent in the files.  The current ordering also causes a very
long chain of intent items to build up, which can exhaust memory.

We need to teach defer ops to finish all the sub-work associated with
each defer op that the caller gave us, to minimize the length of the
defer ops chains; and then we need to teach it to relog items
periodically to avoid pinning the log tail.
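
For a rough picture of where this lands (a simplified sketch pieced
together from the patches below; the free-space threshold check added in
patch 4 is elided), the main loop of xfs_defer_finish_noroll ends up
looking something like this:

	/* Until we run out of pending work to finish... */
	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
		/* Queue new (child) work ahead of the caller's work. */
		xfs_defer_create_intents(*tp);
		list_splice_init(&(*tp)->t_dfops, &dop_pending);

		error = xfs_defer_trans_roll(tp);
		if (error)
			goto out_shutdown;

		/* Relog intents stuck in old checkpoints to free the tail. */
		error = xfs_defer_relog(tp, &dop_pending);
		if (error)
			goto out_shutdown;

		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
				       dfp_list);
		error = xfs_defer_finish_one(*tp, dfp);
		if (error && error != -EAGAIN)
			goto out_shutdown;
	}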

v2: combine all the relog patches into one, and base the decision to
relog an item on whether or not it's in an old checkpoint
v3: fix backwards logic, don't relog items in the same checkpoint,
and split up the changes
v4: fix some comments, split the log changes into a separate patch
v5: add a defer ops relog counter to the xfs stats file

If you're going to start using this mess, you probably ought to just
pull from my git trees, which are linked below.

This is an extraordinary way to destroy everything.  Enjoy!
Comments and questions are, as always, welcome.

--D

kernel git tree:
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git/log/?h=defer-ops-stalls-5.10
---
 fs/xfs/libxfs/xfs_defer.c  |   69 +++++++++++++++++++++++++++++++++++++++++++-
 fs/xfs/xfs_bmap_item.c     |   27 +++++++++++++++++
 fs/xfs/xfs_extfree_item.c  |   29 ++++++++++++++++++
 fs/xfs/xfs_log.c           |   40 +++++++++++++++++++-------
 fs/xfs/xfs_log.h           |    2 +
 fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++
 fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++
 fs/xfs/xfs_stats.c         |    4 +++
 fs/xfs/xfs_stats.h         |    1 +
 fs/xfs/xfs_trace.h         |    1 +
 fs/xfs/xfs_trans.h         |   10 ++++++
 11 files changed, 226 insertions(+), 11 deletions(-)



* [PATCH 1/4] xfs: change the order in which child and parent defer ops are finished
  2020-09-29 17:44 [PATCH v5 0/4] xfs: fix some log stalling problems in defer ops Darrick J. Wong
@ 2020-09-29 17:44 ` Darrick J. Wong
  2020-09-29 17:44 ` [PATCH 2/4] xfs: periodically relog deferred intent items Darrick J. Wong
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 12+ messages in thread
From: Darrick J. Wong @ 2020-09-29 17:44 UTC (permalink / raw)
  To: darrick.wong; +Cc: Dave Chinner, Brian Foster, linux-xfs, david, bfoster

From: Darrick J. Wong <darrick.wong@oracle.com>

The defer ops code has been finishing items in the wrong order -- if a
top level defer op creates items A and B, and finishing item A creates
more defer ops A1 and A2, we'll put the new items on the end of the
chain and process them in the order A B A1 A2.  This is kind of weird,
since it's convenient for programmers to be able to think of A and B as
an ordered sequence where all the sub-tasks for A must finish before we
move on to B, e.g. A A1 A2 B.

Right now, our log intent items are not so complex that this matters,
but this will become important for the atomic extent swapping patchset.
In order to maintain correct reference counting of extents, we have to
unmap and remap extents in that order, and we want to complete that work
before moving on to the next range that the user wants to swap.  This
patch fixes defer ops to satisfy that requirement.

The primary symptom of the incorrect order was noticed in an early
performance analysis of the atomic extent swap code.  An astonishingly
large number of deferred work items accumulated when userspace requested
an atomic update of two very fragmented files.  The cause of this was
traced to the same ordering bug in the inner loop of
xfs_defer_finish_noroll.

If the ->finish_item method of a deferred operation queues new deferred
operations, those new deferred ops are appended to the tail of the
pending work list.  To illustrate, say that a caller creates a
transaction t0 with four deferred operations D0-D3.  The first thing
defer ops does is roll the transaction to t1, leaving us with:

t1: D0(t0), D1(t0), D2(t0), D3(t0)

Let's say that finishing each of D0-D3 will create two new deferred ops.
After finishing D0 and rolling, we'll have the following chain:

t2: D1(t0), D2(t0), D3(t0), d4(t1), d5(t1)

d4 and d5 were logged to t1.  Notice that while we're about to start
work on D1, we haven't actually completed all the work implied by D0
being finished.  So far we've been careful (or lucky) to structure the
dfops callers such that D1 doesn't depend on d4 or d5 being finished,
but this is a potential logic bomb.

There's a second problem lurking.  Let's see what happens as we finish
D1-D3:

t3: D2(t0), D3(t0), d4(t1), d5(t1), d6(t2), d7(t2)
t4: D3(t0), d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3)
t5: d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)

Let's say that d4-d11 are simple work items that don't queue any other
operations, which means that we can complete d4 and roll to t6:

t6: d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
t7: d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
...
t11: d10(t4), d11(t4)
t12: d11(t4)
<done>

When we try to roll to transaction #12, we're holding defer op d11,
which we logged way back in t4.  This means that the tail of the log is
pinned at t4.  If the log is very small or there are a lot of other
threads updating metadata, this means that we might have wrapped the log
and cannot roll to t12 because there isn't enough space left before
we'd run into t4.

Let's shift back to the original failure.  I mentioned before that I
discovered this flaw while developing the atomic file update code.  In
that scenario, we have a defer op (D0) that finds a range of file blocks
to remap, creates a handful of new defer ops to do that, and then asks
to be continued with however much work remains.

So, D0 is the original swapext deferred op.  The first thing defer ops
does is roll to t1:

t1: D0(t0)

We try to finish D0, logging d1 and d2 in the process, but can't get all
the work done.  We log a done item and a new intent item for the work
that D0 still has to do, and roll to t2:

t2: D0'(t1), d1(t1), d2(t1)

We roll and try to finish D0', but still can't get all the work done, so
we log a done item and a new intent item for it, requeue D0 a second
time, and roll to t3:

t3: D0''(t2), d1(t1), d2(t1), d3(t2), d4(t2)

If it takes 48 more rolls to complete D0, then we'll finally dispense
with D0 in t50:

t50: D<fifty primes>(t49), d1(t1), ..., d102(t50)

We then try to roll again to get a chain like this:

t51: d1(t1), d2(t1), ..., d101(t50), d102(t50)
...
t152: d102(t50)
<done>

Notice that in rolling to transaction #51, we're holding on to a log
intent item for d1 that was logged in transaction #1.  This means that
the tail of the log is pinned at t1.  If the log is very small or there
are a lot of other threads updating metadata, this means that we might
have wrapped the log and cannot roll to t51 because there isn't enough
space left before we'd run into t1.  This is of course problem #2 again.

But notice the third problem with this scenario: we have 102 defer ops
tied to this transaction!  Each of these items is backed by pinned
kernel memory, which means that we risk OOM if the chains get too long.

Yikes.  Problem #1 is a subtle logic bomb that could hit someone in the
future; problem #2 applies (rarely) to the current upstream, and problem
#3 applies to work under development.

This is not how incremental deferred operations were supposed to work.
The dfops design of logging in the same transaction an intent-done item
and a new intent item for the work remaining was to make it so that we
only have to juggle enough deferred work items to finish that one small
piece of work.  Deferred log item recovery will find that first
unfinished work item and restart it, no matter how many other intent
items might follow it in the log.  Therefore, it's ok to put the new
intents at the start of the dfops chain.

For the first example, the chains look like this:

t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
t3: d5(t1), D1(t0), D2(t0), D3(t0)
...
t9: d9(t7), D3(t0)
t10: D3(t0)
t11: d10(t10), d11(t10)
t12: d11(t10)

For the second example, the chains look like this:

t1: D0(t0)
t2: d1(t1), d2(t1), D0'(t1)
t3: d2(t1), D0'(t1)
t4: D0'(t1)
t5: d1(t4), d2(t4), D0''(t4)
...
t148: D0<50 primes>(t147)
t149: d101(t148), d102(t148)
t150: d102(t148)
<done>

This actually sucks more for pinning the log tail (we try to roll to t10
while holding an intent item that was logged in t0) but we've solved
problem #1.  We've also reduced the maximum chain length from:

    sum(all the new items) + nr_original_items

to:

    max(new items that each original item creates) + nr_original_items

This solves problem #3 by sharply reducing the number of defer ops that
can be attached to a transaction at any given time.  The change makes
the problem of log tail pinning worse, but it is an improvement that we
need in order to solve problem #2.  Actually solving #2, however, is
left to the next patch.
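
To put numbers on it with the first example above (four original items,
each creating two children): the old bound works out to 8 + 4 = 12
in-memory work items, while the new bound is 2 + 4 = 6.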

Note that a subsequent analysis of some hard-to-trigger reflink and COW
livelocks on extremely fragmented filesystems (or systems running a lot
of IO threads) showed the same symptoms -- uncomfortably large numbers
of incore deferred work items and occasional stalls in the transaction
grant code while waiting for log reservations.  I think this patch and
the next one will also solve these problems.

As originally written, the code used list_splice_tail_init instead of
list_splice_init, so change that, and leave a short comment explaining
our actions.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_defer.c |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)


diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
index c466a3177acc..81084c34138f 100644
--- a/fs/xfs/libxfs/xfs_defer.c
+++ b/fs/xfs/libxfs/xfs_defer.c
@@ -415,8 +415,17 @@ xfs_defer_finish_noroll(
 
 	/* Until we run out of pending work to finish... */
 	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
+		/*
+		 * Deferred items that are created in the process of finishing
+		 * other deferred work items should be queued at the head of
+		 * the pending list, which puts them ahead of the deferred work
+		 * that was created by the caller.  This keeps the number of
+		 * pending work items to a minimum, which decreases the amount
+		 * of time that any one intent item can stick around in memory,
+		 * pinning the log tail.
+		 */
 		xfs_defer_create_intents(*tp);
-		list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);
+		list_splice_init(&(*tp)->t_dfops, &dop_pending);
 
 		error = xfs_defer_trans_roll(tp);
 		if (error)



* [PATCH 2/4] xfs: periodically relog deferred intent items
  2020-09-29 17:44 [PATCH v5 0/4] xfs: fix some log stalling problems in defer ops Darrick J. Wong
  2020-09-29 17:44 ` [PATCH 1/4] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
@ 2020-09-29 17:44 ` Darrick J. Wong
  2020-10-01 16:02   ` Brian Foster
  2020-09-29 17:44 ` [PATCH 3/4] xfs: expose the log push threshold Darrick J. Wong
  2020-09-29 17:44 ` [PATCH 4/4] xfs: only relog deferred intent items if free space in the log gets low Darrick J. Wong
  3 siblings, 1 reply; 12+ messages in thread
From: Darrick J. Wong @ 2020-09-29 17:44 UTC (permalink / raw)
  To: darrick.wong; +Cc: linux-xfs, david, bfoster

From: Darrick J. Wong <darrick.wong@oracle.com>

There's a subtle design flaw in the deferred log item code that can lead
to pinning the log tail.  Taking up the defer ops chain examples from
the previous commit, we can get trapped in sequences like this:

Caller hands us a transaction t0 with D0-D3 attached.  The defer ops
chain will look like the following if the transaction rolls succeed:

t1: D0(t0), D1(t0), D2(t0), D3(t0)
t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
t3: d5(t1), D1(t0), D2(t0), D3(t0)
...
t9: d9(t7), D3(t0)
t10: D3(t0)
t11: d10(t10), d11(t10)
t12: d11(t10)

In transaction 9, we finish d9 and try to roll to t10 while holding onto
an intent item for D3 that we logged in t0.

The previous commit changed the order in which we place new defer ops in
the defer ops processing chain to reduce the maximum chain length.  Now
make xfs_defer_finish_noroll capable of relogging the entire chain
periodically so that we can always move the log tail forward.  Most
chains will never get relogged, except for operations that generate very
long chains (large extents containing many blocks with different sharing
levels) or are on filesystems with small logs and a lot of ongoing
metadata updates.

Callers are now required to ensure that the transaction reservation is
large enough to handle logging done items and new intent items for the
maximum possible chain length.  Most callers are careful to keep the
chain lengths low, so the overhead should be minimal.

The decision to relog an intent item is made based on whether the intent
was logged in a previous checkpoint, since there's no point in relogging
an intent into the same checkpoint.
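
Each intent type grows a ->iop_relog method to do the relogging; they
all follow the same pattern, sketched here from the EFI variant below
(a sketch of the shape, not the verbatim code):

	tp->t_flags |= XFS_TRANS_DIRTY;

	/* Log a done item to release the old, about-to-expire intent... */
	efdp = xfs_trans_get_efd(tp, EFI_ITEM(intent), count);
	efdp->efd_next_extent = count;
	memcpy(efdp->efd_format.efd_extents, extp, count * sizeof(*extp));
	set_bit(XFS_LI_DIRTY, &efdp->efd_item.li_flags);

	/* ...then log a fresh intent describing the same remaining work. */
	efip = xfs_efi_init(tp->t_mountp, count);
	memcpy(efip->efi_format.efi_extents, extp, count * sizeof(*extp));
	atomic_set(&efip->efi_next_extent, count);
	xfs_trans_add_item(tp, &efip->efi_item);
	set_bit(XFS_LI_DIRTY, &efip->efi_item.li_flags);
	return &efip->efi_item;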

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/xfs/libxfs/xfs_defer.c  |   42 ++++++++++++++++++++++++++++++++++++++++++
 fs/xfs/xfs_bmap_item.c     |   27 +++++++++++++++++++++++++++
 fs/xfs/xfs_extfree_item.c  |   29 +++++++++++++++++++++++++++++
 fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++++++++++++
 fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++++++++++++
 fs/xfs/xfs_stats.c         |    4 ++++
 fs/xfs/xfs_stats.h         |    1 +
 fs/xfs/xfs_trace.h         |    1 +
 fs/xfs/xfs_trans.h         |   10 ++++++++++
 9 files changed, 168 insertions(+)


diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
index 81084c34138f..554777d1069c 100644
--- a/fs/xfs/libxfs/xfs_defer.c
+++ b/fs/xfs/libxfs/xfs_defer.c
@@ -17,6 +17,7 @@
 #include "xfs_inode_item.h"
 #include "xfs_trace.h"
 #include "xfs_icache.h"
+#include "xfs_log.h"
 
 /*
  * Deferred Operations in XFS
@@ -345,6 +346,42 @@ xfs_defer_cancel_list(
 	}
 }
 
+/*
+ * Prevent a log intent item from pinning the tail of the log by logging a
+ * done item to release the intent item; and then log a new intent item.
+ * The caller should provide a fresh transaction and roll it after we're done.
+ */
+static int
+xfs_defer_relog(
+	struct xfs_trans		**tpp,
+	struct list_head		*dfops)
+{
+	struct xfs_defer_pending	*dfp;
+
+	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
+
+	list_for_each_entry(dfp, dfops, dfp_list) {
+		/*
+		 * If the log intent item for this deferred op is not a part of
+		 * the current log checkpoint, relog the intent item to keep
+		 * the log tail moving forward.  We're ok with this being racy
+		 * because an incorrect decision means we'll be a little slower
+		 * at pushing the tail.
+		 */
+		if (dfp->dfp_intent == NULL ||
+		    xfs_log_item_in_current_chkpt(dfp->dfp_intent))
+			continue;
+
+		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
+		XFS_STATS_INC((*tpp)->t_mountp, defer_relog);
+		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
+	}
+
+	if ((*tpp)->t_flags & XFS_TRANS_DIRTY)
+		return xfs_defer_trans_roll(tpp);
+	return 0;
+}
+
 /*
  * Log an intent-done item for the first pending intent, and finish the work
  * items.
@@ -431,6 +468,11 @@ xfs_defer_finish_noroll(
 		if (error)
 			goto out_shutdown;
 
+		/* Possibly relog intent items to keep the log moving. */
+		error = xfs_defer_relog(tp, &dop_pending);
+		if (error)
+			goto out_shutdown;
+
 		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
 				       dfp_list);
 		error = xfs_defer_finish_one(*tp, dfp);
diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
index 0ffbc75bafe1..1cca93d3d743 100644
--- a/fs/xfs/xfs_bmap_item.c
+++ b/fs/xfs/xfs_bmap_item.c
@@ -532,6 +532,32 @@ xfs_bui_item_match(
 	return BUI_ITEM(lip)->bui_format.bui_id == intent_id;
 }
 
+/* Relog an intent item to push the log tail forward. */
+static struct xfs_log_item *
+xfs_bui_item_relog(
+	struct xfs_log_item		*intent,
+	struct xfs_trans		*tp)
+{
+	struct xfs_bud_log_item		*budp;
+	struct xfs_bui_log_item		*buip;
+	struct xfs_map_extent		*extp;
+	unsigned int			count;
+
+	count = BUI_ITEM(intent)->bui_format.bui_nextents;
+	extp = BUI_ITEM(intent)->bui_format.bui_extents;
+
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	budp = xfs_trans_get_bud(tp, BUI_ITEM(intent));
+	set_bit(XFS_LI_DIRTY, &budp->bud_item.li_flags);
+
+	buip = xfs_bui_init(tp->t_mountp);
+	memcpy(buip->bui_format.bui_extents, extp, count * sizeof(*extp));
+	atomic_set(&buip->bui_next_extent, count);
+	xfs_trans_add_item(tp, &buip->bui_item);
+	set_bit(XFS_LI_DIRTY, &buip->bui_item.li_flags);
+	return &buip->bui_item;
+}
+
 static const struct xfs_item_ops xfs_bui_item_ops = {
 	.iop_size	= xfs_bui_item_size,
 	.iop_format	= xfs_bui_item_format,
@@ -539,6 +565,7 @@ static const struct xfs_item_ops xfs_bui_item_ops = {
 	.iop_release	= xfs_bui_item_release,
 	.iop_recover	= xfs_bui_item_recover,
 	.iop_match	= xfs_bui_item_match,
+	.iop_relog	= xfs_bui_item_relog,
 };
 
 /*
diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c
index 3920542f5736..6c11bfc3d452 100644
--- a/fs/xfs/xfs_extfree_item.c
+++ b/fs/xfs/xfs_extfree_item.c
@@ -642,6 +642,34 @@ xfs_efi_item_match(
 	return EFI_ITEM(lip)->efi_format.efi_id == intent_id;
 }
 
+/* Relog an intent item to push the log tail forward. */
+static struct xfs_log_item *
+xfs_efi_item_relog(
+	struct xfs_log_item		*intent,
+	struct xfs_trans		*tp)
+{
+	struct xfs_efd_log_item		*efdp;
+	struct xfs_efi_log_item		*efip;
+	struct xfs_extent		*extp;
+	unsigned int			count;
+
+	count = EFI_ITEM(intent)->efi_format.efi_nextents;
+	extp = EFI_ITEM(intent)->efi_format.efi_extents;
+
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	efdp = xfs_trans_get_efd(tp, EFI_ITEM(intent), count);
+	efdp->efd_next_extent = count;
+	memcpy(efdp->efd_format.efd_extents, extp, count * sizeof(*extp));
+	set_bit(XFS_LI_DIRTY, &efdp->efd_item.li_flags);
+
+	efip = xfs_efi_init(tp->t_mountp, count);
+	memcpy(efip->efi_format.efi_extents, extp, count * sizeof(*extp));
+	atomic_set(&efip->efi_next_extent, count);
+	xfs_trans_add_item(tp, &efip->efi_item);
+	set_bit(XFS_LI_DIRTY, &efip->efi_item.li_flags);
+	return &efip->efi_item;
+}
+
 static const struct xfs_item_ops xfs_efi_item_ops = {
 	.iop_size	= xfs_efi_item_size,
 	.iop_format	= xfs_efi_item_format,
@@ -649,6 +677,7 @@ static const struct xfs_item_ops xfs_efi_item_ops = {
 	.iop_release	= xfs_efi_item_release,
 	.iop_recover	= xfs_efi_item_recover,
 	.iop_match	= xfs_efi_item_match,
+	.iop_relog	= xfs_efi_item_relog,
 };
 
 /*
diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
index ad895b48f365..7529eb63ce94 100644
--- a/fs/xfs/xfs_refcount_item.c
+++ b/fs/xfs/xfs_refcount_item.c
@@ -560,6 +560,32 @@ xfs_cui_item_match(
 	return CUI_ITEM(lip)->cui_format.cui_id == intent_id;
 }
 
+/* Relog an intent item to push the log tail forward. */
+static struct xfs_log_item *
+xfs_cui_item_relog(
+	struct xfs_log_item		*intent,
+	struct xfs_trans		*tp)
+{
+	struct xfs_cud_log_item		*cudp;
+	struct xfs_cui_log_item		*cuip;
+	struct xfs_phys_extent		*extp;
+	unsigned int			count;
+
+	count = CUI_ITEM(intent)->cui_format.cui_nextents;
+	extp = CUI_ITEM(intent)->cui_format.cui_extents;
+
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	cudp = xfs_trans_get_cud(tp, CUI_ITEM(intent));
+	set_bit(XFS_LI_DIRTY, &cudp->cud_item.li_flags);
+
+	cuip = xfs_cui_init(tp->t_mountp, count);
+	memcpy(cuip->cui_format.cui_extents, extp, count * sizeof(*extp));
+	atomic_set(&cuip->cui_next_extent, count);
+	xfs_trans_add_item(tp, &cuip->cui_item);
+	set_bit(XFS_LI_DIRTY, &cuip->cui_item.li_flags);
+	return &cuip->cui_item;
+}
+
 static const struct xfs_item_ops xfs_cui_item_ops = {
 	.iop_size	= xfs_cui_item_size,
 	.iop_format	= xfs_cui_item_format,
@@ -567,6 +593,7 @@ static const struct xfs_item_ops xfs_cui_item_ops = {
 	.iop_release	= xfs_cui_item_release,
 	.iop_recover	= xfs_cui_item_recover,
 	.iop_match	= xfs_cui_item_match,
+	.iop_relog	= xfs_cui_item_relog,
 };
 
 /*
diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
index 1163f32c3e62..7adc996ca6e3 100644
--- a/fs/xfs/xfs_rmap_item.c
+++ b/fs/xfs/xfs_rmap_item.c
@@ -583,6 +583,32 @@ xfs_rui_item_match(
 	return RUI_ITEM(lip)->rui_format.rui_id == intent_id;
 }
 
+/* Relog an intent item to push the log tail forward. */
+static struct xfs_log_item *
+xfs_rui_item_relog(
+	struct xfs_log_item		*intent,
+	struct xfs_trans		*tp)
+{
+	struct xfs_rud_log_item		*rudp;
+	struct xfs_rui_log_item		*ruip;
+	struct xfs_map_extent		*extp;
+	unsigned int			count;
+
+	count = RUI_ITEM(intent)->rui_format.rui_nextents;
+	extp = RUI_ITEM(intent)->rui_format.rui_extents;
+
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	rudp = xfs_trans_get_rud(tp, RUI_ITEM(intent));
+	set_bit(XFS_LI_DIRTY, &rudp->rud_item.li_flags);
+
+	ruip = xfs_rui_init(tp->t_mountp, count);
+	memcpy(ruip->rui_format.rui_extents, extp, count * sizeof(*extp));
+	atomic_set(&ruip->rui_next_extent, count);
+	xfs_trans_add_item(tp, &ruip->rui_item);
+	set_bit(XFS_LI_DIRTY, &ruip->rui_item.li_flags);
+	return &ruip->rui_item;
+}
+
 static const struct xfs_item_ops xfs_rui_item_ops = {
 	.iop_size	= xfs_rui_item_size,
 	.iop_format	= xfs_rui_item_format,
@@ -590,6 +616,7 @@ static const struct xfs_item_ops xfs_rui_item_ops = {
 	.iop_release	= xfs_rui_item_release,
 	.iop_recover	= xfs_rui_item_recover,
 	.iop_match	= xfs_rui_item_match,
+	.iop_relog	= xfs_rui_item_relog,
 };
 
 /*
diff --git a/fs/xfs/xfs_stats.c b/fs/xfs/xfs_stats.c
index f70f1255220b..20e0534a772c 100644
--- a/fs/xfs/xfs_stats.c
+++ b/fs/xfs/xfs_stats.c
@@ -23,6 +23,7 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
 	uint64_t	xs_xstrat_bytes = 0;
 	uint64_t	xs_write_bytes = 0;
 	uint64_t	xs_read_bytes = 0;
+	uint64_t	defer_relog = 0;
 
 	static const struct xstats_entry {
 		char	*desc;
@@ -70,10 +71,13 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
 		xs_xstrat_bytes += per_cpu_ptr(stats, i)->s.xs_xstrat_bytes;
 		xs_write_bytes += per_cpu_ptr(stats, i)->s.xs_write_bytes;
 		xs_read_bytes += per_cpu_ptr(stats, i)->s.xs_read_bytes;
+		defer_relog += per_cpu_ptr(stats, i)->s.defer_relog;
 	}
 
 	len += scnprintf(buf + len, PATH_MAX-len, "xpc %Lu %Lu %Lu\n",
 			xs_xstrat_bytes, xs_write_bytes, xs_read_bytes);
+	len += scnprintf(buf + len, PATH_MAX-len, "defer_relog %llu\n",
+			defer_relog);
 	len += scnprintf(buf + len, PATH_MAX-len, "debug %u\n",
 #if defined(DEBUG)
 		1);
diff --git a/fs/xfs/xfs_stats.h b/fs/xfs/xfs_stats.h
index 34d704f703d2..43ffba74f045 100644
--- a/fs/xfs/xfs_stats.h
+++ b/fs/xfs/xfs_stats.h
@@ -137,6 +137,7 @@ struct __xfsstats {
 	uint64_t		xs_xstrat_bytes;
 	uint64_t		xs_write_bytes;
 	uint64_t		xs_read_bytes;
+	uint64_t		defer_relog;
 };
 
 #define	xfsstats_offset(f)	(offsetof(struct __xfsstats, f)/sizeof(uint32_t))
diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
index dcdcf99cfa5d..86951652d3ed 100644
--- a/fs/xfs/xfs_trace.h
+++ b/fs/xfs/xfs_trace.h
@@ -2533,6 +2533,7 @@ DEFINE_DEFER_PENDING_EVENT(xfs_defer_create_intent);
 DEFINE_DEFER_PENDING_EVENT(xfs_defer_cancel_list);
 DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_finish);
 DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_abort);
+DEFINE_DEFER_PENDING_EVENT(xfs_defer_relog_intent);
 
 #define DEFINE_BMAP_FREE_DEFERRED_EVENT DEFINE_PHYS_EXTENT_DEFERRED_EVENT
 DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_bmap_free_defer);
diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
index 186e77d08cc1..084658946cc8 100644
--- a/fs/xfs/xfs_trans.h
+++ b/fs/xfs/xfs_trans.h
@@ -75,6 +75,8 @@ struct xfs_item_ops {
 	int (*iop_recover)(struct xfs_log_item *lip,
 			   struct list_head *capture_list);
 	bool (*iop_match)(struct xfs_log_item *item, uint64_t id);
+	struct xfs_log_item *(*iop_relog)(struct xfs_log_item *intent,
+			struct xfs_trans *tp);
 };
 
 /* Is this log item a deferred action intent? */
@@ -258,4 +260,12 @@ void		xfs_trans_buf_copy_type(struct xfs_buf *dst_bp,
 
 extern kmem_zone_t	*xfs_trans_zone;
 
+static inline struct xfs_log_item *
+xfs_trans_item_relog(
+	struct xfs_log_item	*lip,
+	struct xfs_trans	*tp)
+{
+	return lip->li_ops->iop_relog(lip, tp);
+}
+
 #endif	/* __XFS_TRANS_H__ */



* [PATCH 3/4] xfs: expose the log push threshold
  2020-09-29 17:44 [PATCH v5 0/4] xfs: fix some log stalling problems in defer ops Darrick J. Wong
  2020-09-29 17:44 ` [PATCH 1/4] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
  2020-09-29 17:44 ` [PATCH 2/4] xfs: periodically relog deferred intent items Darrick J. Wong
@ 2020-09-29 17:44 ` Darrick J. Wong
  2020-09-29 17:44 ` [PATCH 4/4] xfs: only relog deferred intent items if free space in the log gets low Darrick J. Wong
  3 siblings, 0 replies; 12+ messages in thread
From: Darrick J. Wong @ 2020-09-29 17:44 UTC (permalink / raw)
  To: darrick.wong; +Cc: Brian Foster, linux-xfs, david, bfoster

From: Darrick J. Wong <darrick.wong@oracle.com>

Separate the computation of the log push threshold and the push logic in
xlog_grant_push_ail.  This enables higher level code to determine (for
example) that it is holding on to a logged intent item and the log is so
busy that it is more than 75% full.  In that case, it would be desirable
to move the log item towards the head to release the tail, which we will
cover in the next patch.
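
As a rough sketch of the intended use (this mirrors what the next patch
does in xfs_defer_relog; 'log' and 'lip' here stand in for the struct
xlog and the log item in question):

	xfs_lsn_t	threshold_lsn;

	/* Where must the tail move to restore the free space targets? */
	threshold_lsn = xlog_grant_push_threshold(log, 0);
	if (threshold_lsn == NULLCOMMITLSN)
		return;		/* plenty of free space, nothing to do */

	/* Skip items that already sit ahead of the push target. */
	if (XFS_LSN_CMP(lip->li_lsn, threshold_lsn) >= 0)
		return;

	/* Otherwise, relog lip to move it towards the head of the log. */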

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/xfs_log.c |   40 ++++++++++++++++++++++++++++++----------
 fs/xfs/xfs_log.h |    2 ++
 2 files changed, 32 insertions(+), 10 deletions(-)


diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 7a4ba408a3a2..fa2d05e65ff1 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -1475,14 +1475,14 @@ xlog_commit_record(
 }
 
 /*
- * Push on the buffer cache code if we ever use more than 75% of the on-disk
- * log space.  This code pushes on the lsn which would supposedly free up
- * the 25% which we want to leave free.  We may need to adopt a policy which
- * pushes on an lsn which is further along in the log once we reach the high
- * water mark.  In this manner, we would be creating a low water mark.
+ * Compute the LSN that we'd need to push the log tail towards in order to have
+ * (a) enough on-disk log space to log the number of bytes specified, (b) at
+ * least 25% of the log space free, and (c) at least 256 blocks free.  If the
+ * log free space already meets all three thresholds, this function returns
+ * NULLCOMMITLSN.
  */
-STATIC void
-xlog_grant_push_ail(
+xfs_lsn_t
+xlog_grant_push_threshold(
 	struct xlog	*log,
 	int		need_bytes)
 {
@@ -1508,7 +1508,7 @@ xlog_grant_push_ail(
 	free_threshold = max(free_threshold, (log->l_logBBsize >> 2));
 	free_threshold = max(free_threshold, 256);
 	if (free_blocks >= free_threshold)
-		return;
+		return NULLCOMMITLSN;
 
 	xlog_crack_atomic_lsn(&log->l_tail_lsn, &threshold_cycle,
 						&threshold_block);
@@ -1528,13 +1528,33 @@ xlog_grant_push_ail(
 	if (XFS_LSN_CMP(threshold_lsn, last_sync_lsn) > 0)
 		threshold_lsn = last_sync_lsn;
 
+	return threshold_lsn;
+}
+
+/*
+ * Push the tail of the log if we need to do so to maintain the free log space
+ * thresholds set out by xlog_grant_push_threshold.  We may need to adopt a
+ * policy which pushes on an lsn which is further along in the log once we
+ * reach the high water mark.  In this manner, we would be creating a low water
+ * mark.
+ */
+STATIC void
+xlog_grant_push_ail(
+	struct xlog	*log,
+	int		need_bytes)
+{
+	xfs_lsn_t	threshold_lsn;
+
+	threshold_lsn = xlog_grant_push_threshold(log, need_bytes);
+	if (threshold_lsn == NULLCOMMITLSN || XLOG_FORCED_SHUTDOWN(log))
+		return;
+
 	/*
 	 * Get the transaction layer to kick the dirty buffers out to
 	 * disk asynchronously. No point in trying to do this if
 	 * the filesystem is shutting down.
 	 */
-	if (!XLOG_FORCED_SHUTDOWN(log))
-		xfs_ail_push(log->l_ailp, threshold_lsn);
+	xfs_ail_push(log->l_ailp, threshold_lsn);
 }
 
 /*
diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h
index 1412d6993f1e..58c3fcbec94a 100644
--- a/fs/xfs/xfs_log.h
+++ b/fs/xfs/xfs_log.h
@@ -141,4 +141,6 @@ void	xfs_log_quiesce(struct xfs_mount *mp);
 bool	xfs_log_check_lsn(struct xfs_mount *, xfs_lsn_t);
 bool	xfs_log_in_recovery(struct xfs_mount *);
 
+xfs_lsn_t xlog_grant_push_threshold(struct xlog *log, int need_bytes);
+
 #endif	/* __XFS_LOG_H__ */



* [PATCH 4/4] xfs: only relog deferred intent items if free space in the log gets low
  2020-09-29 17:44 [PATCH v5 0/4] xfs: fix some log stalling problems in defer ops Darrick J. Wong
                   ` (2 preceding siblings ...)
  2020-09-29 17:44 ` [PATCH 3/4] xfs: expose the log push threshold Darrick J. Wong
@ 2020-09-29 17:44 ` Darrick J. Wong
  2020-10-01 16:02   ` Brian Foster
  3 siblings, 1 reply; 12+ messages in thread
From: Darrick J. Wong @ 2020-09-29 17:44 UTC (permalink / raw)
  To: darrick.wong; +Cc: linux-xfs, david, bfoster

From: Darrick J. Wong <darrick.wong@oracle.com>

Now that we have the ability to ask the log how far the tail needs to be
pushed to maintain its free space targets, augment the decision to relog
an intent item so that we only do it if the log has hit the 75% full
threshold.  There's no point in relogging an intent into the same
checkpoint, and there's no need to relog if there's plenty of free space
in the log.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/xfs/libxfs/xfs_defer.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)


diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
index 554777d1069c..2ba02f2e59a1 100644
--- a/fs/xfs/libxfs/xfs_defer.c
+++ b/fs/xfs/libxfs/xfs_defer.c
@@ -356,7 +356,10 @@ xfs_defer_relog(
 	struct xfs_trans		**tpp,
 	struct list_head		*dfops)
 {
+	struct xlog			*log = (*tpp)->t_mountp->m_log;
 	struct xfs_defer_pending	*dfp;
+	xfs_lsn_t			threshold_lsn = NULLCOMMITLSN;
+
 
 	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
 
@@ -372,6 +375,19 @@ xfs_defer_relog(
 		    xfs_log_item_in_current_chkpt(dfp->dfp_intent))
 			continue;
 
+		/*
+		 * Figure out where we need the tail to be in order to maintain
+		 * the minimum required free space in the log.  Only sample
+		 * the log threshold once per call.
+		 */
+		if (threshold_lsn == NULLCOMMITLSN) {
+			threshold_lsn = xlog_grant_push_threshold(log, 0);
+			if (threshold_lsn == NULLCOMMITLSN)
+				break;
+		}
+		if (XFS_LSN_CMP(dfp->dfp_intent->li_lsn, threshold_lsn) >= 0)
+			continue;
+
 		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
 		XFS_STATS_INC((*tpp)->t_mountp, defer_relog);
 		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);



* Re: [PATCH 2/4] xfs: periodically relog deferred intent items
  2020-09-29 17:44 ` [PATCH 2/4] xfs: periodically relog deferred intent items Darrick J. Wong
@ 2020-10-01 16:02   ` Brian Foster
  2020-10-01 17:22     ` Darrick J. Wong
  0 siblings, 1 reply; 12+ messages in thread
From: Brian Foster @ 2020-10-01 16:02 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, david

On Tue, Sep 29, 2020 at 10:44:22AM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> There's a subtle design flaw in the deferred log item code that can lead
> to pinning the log tail.  Taking up the defer ops chain examples from
> the previous commit, we can get trapped in sequences like this:
> 
> Caller hands us a transaction t0 with D0-D3 attached.  The defer ops
> chain will look like the following if the transaction rolls succeed:
> 
> t1: D0(t0), D1(t0), D2(t0), D3(t0)
> t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
> t3: d5(t1), D1(t0), D2(t0), D3(t0)
> ...
> t9: d9(t7), D3(t0)
> t10: D3(t0)
> t11: d10(t10), d11(t10)
> t12: d11(t10)
> 
> In transaction 9, we finish d9 and try to roll to t10 while holding onto
> an intent item for D3 that we logged in t0.
> 
> The previous commit changed the order in which we place new defer ops in
> the defer ops processing chain to reduce the maximum chain length.  Now
> make xfs_defer_finish_noroll capable of relogging the entire chain
> periodically so that we can always move the log tail forward.  Most
> chains will never get relogged, except for operations that generate very
> long chains (large extents containing many blocks with different sharing
> levels) or are on filesystems with small logs and a lot of ongoing
> metadata updates.
> 
> Callers are now required to ensure that the transaction reservation is
> large enough to handle logging done items and new intent items for the
> maximum possible chain length.  Most callers are careful to keep the
> chain lengths low, so the overhead should be minimal.
> 
> The decision to relog an intent item is made based on whether the intent
> was logged in a previous checkpoint, since there's no point in relogging
> an intent into the same checkpoint.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fs/xfs/libxfs/xfs_defer.c  |   42 ++++++++++++++++++++++++++++++++++++++++++
>  fs/xfs/xfs_bmap_item.c     |   27 +++++++++++++++++++++++++++
>  fs/xfs/xfs_extfree_item.c  |   29 +++++++++++++++++++++++++++++
>  fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++++++++++++
>  fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++++++++++++
>  fs/xfs/xfs_stats.c         |    4 ++++
>  fs/xfs/xfs_stats.h         |    1 +
>  fs/xfs/xfs_trace.h         |    1 +
>  fs/xfs/xfs_trans.h         |   10 ++++++++++
>  9 files changed, 168 insertions(+)
> 
> 
> diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> index 81084c34138f..554777d1069c 100644
> --- a/fs/xfs/libxfs/xfs_defer.c
> +++ b/fs/xfs/libxfs/xfs_defer.c
> @@ -17,6 +17,7 @@
>  #include "xfs_inode_item.h"
>  #include "xfs_trace.h"
>  #include "xfs_icache.h"
> +#include "xfs_log.h"
>  
>  /*
>   * Deferred Operations in XFS
> @@ -345,6 +346,42 @@ xfs_defer_cancel_list(
>  	}
>  }
>  
> +/*
> + * Prevent a log intent item from pinning the tail of the log by logging a
> + * done item to release the intent item; and then log a new intent item.
> + * The caller should provide a fresh transaction and roll it after we're done.
> + */
> +static int
> +xfs_defer_relog(
> +	struct xfs_trans		**tpp,
> +	struct list_head		*dfops)
> +{
> +	struct xfs_defer_pending	*dfp;
> +
> +	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
> +
> +	list_for_each_entry(dfp, dfops, dfp_list) {
> +		/*
> +		 * If the log intent item for this deferred op is not a part of
> +		 * the current log checkpoint, relog the intent item to keep
> +		 * the log tail moving forward.  We're ok with this being racy
> +		 * because an incorrect decision means we'll be a little slower
> +		 * at pushing the tail.
> +		 */
> +		if (dfp->dfp_intent == NULL ||
> +		    xfs_log_item_in_current_chkpt(dfp->dfp_intent))
> +			continue;
> +

As discussed in the previous thread and on IRC, I'm a little concerned
that this could result in undesirable behavior in certain cases. For
example, if the chain grows too large for a transaction (since we don't
explicitly account intents in transaction reservation afaict), if we
have many of them hammering the log all at once, etc. That said, I don't
recall whether the previous versions included the transaction roll below
after relogging one or more intents or I just missed it, but that
mitigates one of my concerns because I was thinking we'd log the chain
in the same transaction used for the next dfop on the list.

In any event, I'm fine with it if this is preferred and has been
sufficiently stress tested:

Reviewed-by: Brian Foster <bfoster@redhat.com>

> +		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
> +		XFS_STATS_INC((*tpp)->t_mountp, defer_relog);
> +		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
> +	}
> +
> +	if ((*tpp)->t_flags & XFS_TRANS_DIRTY)
> +		return xfs_defer_trans_roll(tpp);
> +	return 0;
> +}
> +
>  /*
>   * Log an intent-done item for the first pending intent, and finish the work
>   * items.
> @@ -431,6 +468,11 @@ xfs_defer_finish_noroll(
>  		if (error)
>  			goto out_shutdown;
>  
> +		/* Possibly relog intent items to keep the log moving. */
> +		error = xfs_defer_relog(tp, &dop_pending);
> +		if (error)
> +			goto out_shutdown;
> +
>  		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
>  				       dfp_list);
>  		error = xfs_defer_finish_one(*tp, dfp);
> diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
> index 0ffbc75bafe1..1cca93d3d743 100644
> --- a/fs/xfs/xfs_bmap_item.c
> +++ b/fs/xfs/xfs_bmap_item.c
> @@ -532,6 +532,32 @@ xfs_bui_item_match(
>  	return BUI_ITEM(lip)->bui_format.bui_id == intent_id;
>  }
>  
> +/* Relog an intent item to push the log tail forward. */
> +static struct xfs_log_item *
> +xfs_bui_item_relog(
> +	struct xfs_log_item		*intent,
> +	struct xfs_trans		*tp)
> +{
> +	struct xfs_bud_log_item		*budp;
> +	struct xfs_bui_log_item		*buip;
> +	struct xfs_map_extent		*extp;
> +	unsigned int			count;
> +
> +	count = BUI_ITEM(intent)->bui_format.bui_nextents;
> +	extp = BUI_ITEM(intent)->bui_format.bui_extents;
> +
> +	tp->t_flags |= XFS_TRANS_DIRTY;
> +	budp = xfs_trans_get_bud(tp, BUI_ITEM(intent));
> +	set_bit(XFS_LI_DIRTY, &budp->bud_item.li_flags);
> +
> +	buip = xfs_bui_init(tp->t_mountp);
> +	memcpy(buip->bui_format.bui_extents, extp, count * sizeof(*extp));
> +	atomic_set(&buip->bui_next_extent, count);
> +	xfs_trans_add_item(tp, &buip->bui_item);
> +	set_bit(XFS_LI_DIRTY, &buip->bui_item.li_flags);
> +	return &buip->bui_item;
> +}
> +
>  static const struct xfs_item_ops xfs_bui_item_ops = {
>  	.iop_size	= xfs_bui_item_size,
>  	.iop_format	= xfs_bui_item_format,
> @@ -539,6 +565,7 @@ static const struct xfs_item_ops xfs_bui_item_ops = {
>  	.iop_release	= xfs_bui_item_release,
>  	.iop_recover	= xfs_bui_item_recover,
>  	.iop_match	= xfs_bui_item_match,
> +	.iop_relog	= xfs_bui_item_relog,
>  };
>  
>  /*
> diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c
> index 3920542f5736..6c11bfc3d452 100644
> --- a/fs/xfs/xfs_extfree_item.c
> +++ b/fs/xfs/xfs_extfree_item.c
> @@ -642,6 +642,34 @@ xfs_efi_item_match(
>  	return EFI_ITEM(lip)->efi_format.efi_id == intent_id;
>  }
>  
> +/* Relog an intent item to push the log tail forward. */
> +static struct xfs_log_item *
> +xfs_efi_item_relog(
> +	struct xfs_log_item		*intent,
> +	struct xfs_trans		*tp)
> +{
> +	struct xfs_efd_log_item		*efdp;
> +	struct xfs_efi_log_item		*efip;
> +	struct xfs_extent		*extp;
> +	unsigned int			count;
> +
> +	count = EFI_ITEM(intent)->efi_format.efi_nextents;
> +	extp = EFI_ITEM(intent)->efi_format.efi_extents;
> +
> +	tp->t_flags |= XFS_TRANS_DIRTY;
> +	efdp = xfs_trans_get_efd(tp, EFI_ITEM(intent), count);
> +	efdp->efd_next_extent = count;
> +	memcpy(efdp->efd_format.efd_extents, extp, count * sizeof(*extp));
> +	set_bit(XFS_LI_DIRTY, &efdp->efd_item.li_flags);
> +
> +	efip = xfs_efi_init(tp->t_mountp, count);
> +	memcpy(efip->efi_format.efi_extents, extp, count * sizeof(*extp));
> +	atomic_set(&efip->efi_next_extent, count);
> +	xfs_trans_add_item(tp, &efip->efi_item);
> +	set_bit(XFS_LI_DIRTY, &efip->efi_item.li_flags);
> +	return &efip->efi_item;
> +}
> +
>  static const struct xfs_item_ops xfs_efi_item_ops = {
>  	.iop_size	= xfs_efi_item_size,
>  	.iop_format	= xfs_efi_item_format,
> @@ -649,6 +677,7 @@ static const struct xfs_item_ops xfs_efi_item_ops = {
>  	.iop_release	= xfs_efi_item_release,
>  	.iop_recover	= xfs_efi_item_recover,
>  	.iop_match	= xfs_efi_item_match,
> +	.iop_relog	= xfs_efi_item_relog,
>  };
>  
>  /*
> diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
> index ad895b48f365..7529eb63ce94 100644
> --- a/fs/xfs/xfs_refcount_item.c
> +++ b/fs/xfs/xfs_refcount_item.c
> @@ -560,6 +560,32 @@ xfs_cui_item_match(
>  	return CUI_ITEM(lip)->cui_format.cui_id == intent_id;
>  }
>  
> +/* Relog an intent item to push the log tail forward. */
> +static struct xfs_log_item *
> +xfs_cui_item_relog(
> +	struct xfs_log_item		*intent,
> +	struct xfs_trans		*tp)
> +{
> +	struct xfs_cud_log_item		*cudp;
> +	struct xfs_cui_log_item		*cuip;
> +	struct xfs_phys_extent		*extp;
> +	unsigned int			count;
> +
> +	count = CUI_ITEM(intent)->cui_format.cui_nextents;
> +	extp = CUI_ITEM(intent)->cui_format.cui_extents;
> +
> +	tp->t_flags |= XFS_TRANS_DIRTY;
> +	cudp = xfs_trans_get_cud(tp, CUI_ITEM(intent));
> +	set_bit(XFS_LI_DIRTY, &cudp->cud_item.li_flags);
> +
> +	cuip = xfs_cui_init(tp->t_mountp, count);
> +	memcpy(cuip->cui_format.cui_extents, extp, count * sizeof(*extp));
> +	atomic_set(&cuip->cui_next_extent, count);
> +	xfs_trans_add_item(tp, &cuip->cui_item);
> +	set_bit(XFS_LI_DIRTY, &cuip->cui_item.li_flags);
> +	return &cuip->cui_item;
> +}
> +
>  static const struct xfs_item_ops xfs_cui_item_ops = {
>  	.iop_size	= xfs_cui_item_size,
>  	.iop_format	= xfs_cui_item_format,
> @@ -567,6 +593,7 @@ static const struct xfs_item_ops xfs_cui_item_ops = {
>  	.iop_release	= xfs_cui_item_release,
>  	.iop_recover	= xfs_cui_item_recover,
>  	.iop_match	= xfs_cui_item_match,
> +	.iop_relog	= xfs_cui_item_relog,
>  };
>  
>  /*
> diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
> index 1163f32c3e62..7adc996ca6e3 100644
> --- a/fs/xfs/xfs_rmap_item.c
> +++ b/fs/xfs/xfs_rmap_item.c
> @@ -583,6 +583,32 @@ xfs_rui_item_match(
>  	return RUI_ITEM(lip)->rui_format.rui_id == intent_id;
>  }
>  
> +/* Relog an intent item to push the log tail forward. */
> +static struct xfs_log_item *
> +xfs_rui_item_relog(
> +	struct xfs_log_item		*intent,
> +	struct xfs_trans		*tp)
> +{
> +	struct xfs_rud_log_item		*rudp;
> +	struct xfs_rui_log_item		*ruip;
> +	struct xfs_map_extent		*extp;
> +	unsigned int			count;
> +
> +	count = RUI_ITEM(intent)->rui_format.rui_nextents;
> +	extp = RUI_ITEM(intent)->rui_format.rui_extents;
> +
> +	tp->t_flags |= XFS_TRANS_DIRTY;
> +	rudp = xfs_trans_get_rud(tp, RUI_ITEM(intent));
> +	set_bit(XFS_LI_DIRTY, &rudp->rud_item.li_flags);
> +
> +	ruip = xfs_rui_init(tp->t_mountp, count);
> +	memcpy(ruip->rui_format.rui_extents, extp, count * sizeof(*extp));
> +	atomic_set(&ruip->rui_next_extent, count);
> +	xfs_trans_add_item(tp, &ruip->rui_item);
> +	set_bit(XFS_LI_DIRTY, &ruip->rui_item.li_flags);
> +	return &ruip->rui_item;
> +}
> +
>  static const struct xfs_item_ops xfs_rui_item_ops = {
>  	.iop_size	= xfs_rui_item_size,
>  	.iop_format	= xfs_rui_item_format,
> @@ -590,6 +616,7 @@ static const struct xfs_item_ops xfs_rui_item_ops = {
>  	.iop_release	= xfs_rui_item_release,
>  	.iop_recover	= xfs_rui_item_recover,
>  	.iop_match	= xfs_rui_item_match,
> +	.iop_relog	= xfs_rui_item_relog,
>  };
>  
>  /*
> diff --git a/fs/xfs/xfs_stats.c b/fs/xfs/xfs_stats.c
> index f70f1255220b..20e0534a772c 100644
> --- a/fs/xfs/xfs_stats.c
> +++ b/fs/xfs/xfs_stats.c
> @@ -23,6 +23,7 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
>  	uint64_t	xs_xstrat_bytes = 0;
>  	uint64_t	xs_write_bytes = 0;
>  	uint64_t	xs_read_bytes = 0;
> +	uint64_t	defer_relog = 0;
>  
>  	static const struct xstats_entry {
>  		char	*desc;
> @@ -70,10 +71,13 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
>  		xs_xstrat_bytes += per_cpu_ptr(stats, i)->s.xs_xstrat_bytes;
>  		xs_write_bytes += per_cpu_ptr(stats, i)->s.xs_write_bytes;
>  		xs_read_bytes += per_cpu_ptr(stats, i)->s.xs_read_bytes;
> +		defer_relog += per_cpu_ptr(stats, i)->s.defer_relog;
>  	}
>  
>  	len += scnprintf(buf + len, PATH_MAX-len, "xpc %Lu %Lu %Lu\n",
>  			xs_xstrat_bytes, xs_write_bytes, xs_read_bytes);
> +	len += scnprintf(buf + len, PATH_MAX-len, "defer_relog %llu\n",
> +			defer_relog);
>  	len += scnprintf(buf + len, PATH_MAX-len, "debug %u\n",
>  #if defined(DEBUG)
>  		1);
> diff --git a/fs/xfs/xfs_stats.h b/fs/xfs/xfs_stats.h
> index 34d704f703d2..43ffba74f045 100644
> --- a/fs/xfs/xfs_stats.h
> +++ b/fs/xfs/xfs_stats.h
> @@ -137,6 +137,7 @@ struct __xfsstats {
>  	uint64_t		xs_xstrat_bytes;
>  	uint64_t		xs_write_bytes;
>  	uint64_t		xs_read_bytes;
> +	uint64_t		defer_relog;
>  };
>  
>  #define	xfsstats_offset(f)	(offsetof(struct __xfsstats, f)/sizeof(uint32_t))
> diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
> index dcdcf99cfa5d..86951652d3ed 100644
> --- a/fs/xfs/xfs_trace.h
> +++ b/fs/xfs/xfs_trace.h
> @@ -2533,6 +2533,7 @@ DEFINE_DEFER_PENDING_EVENT(xfs_defer_create_intent);
>  DEFINE_DEFER_PENDING_EVENT(xfs_defer_cancel_list);
>  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_finish);
>  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_abort);
> +DEFINE_DEFER_PENDING_EVENT(xfs_defer_relog_intent);
>  
>  #define DEFINE_BMAP_FREE_DEFERRED_EVENT DEFINE_PHYS_EXTENT_DEFERRED_EVENT
>  DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_bmap_free_defer);
> diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
> index 186e77d08cc1..084658946cc8 100644
> --- a/fs/xfs/xfs_trans.h
> +++ b/fs/xfs/xfs_trans.h
> @@ -75,6 +75,8 @@ struct xfs_item_ops {
>  	int (*iop_recover)(struct xfs_log_item *lip,
>  			   struct list_head *capture_list);
>  	bool (*iop_match)(struct xfs_log_item *item, uint64_t id);
> +	struct xfs_log_item *(*iop_relog)(struct xfs_log_item *intent,
> +			struct xfs_trans *tp);
>  };
>  
>  /* Is this log item a deferred action intent? */
> @@ -258,4 +260,12 @@ void		xfs_trans_buf_copy_type(struct xfs_buf *dst_bp,
>  
>  extern kmem_zone_t	*xfs_trans_zone;
>  
> +static inline struct xfs_log_item *
> +xfs_trans_item_relog(
> +	struct xfs_log_item	*lip,
> +	struct xfs_trans	*tp)
> +{
> +	return lip->li_ops->iop_relog(lip, tp);
> +}
> +
>  #endif	/* __XFS_TRANS_H__ */
> 



* Re: [PATCH 4/4] xfs: only relog deferred intent items if free space in the log gets low
  2020-09-29 17:44 ` [PATCH 4/4] xfs: only relog deferred intent items if free space in the log gets low Darrick J. Wong
@ 2020-10-01 16:02   ` Brian Foster
  2020-10-01 17:33     ` Darrick J. Wong
  0 siblings, 1 reply; 12+ messages in thread
From: Brian Foster @ 2020-10-01 16:02 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, david

On Tue, Sep 29, 2020 at 10:44:35AM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> Now that we have the ability to ask the log how far the tail needs to be
> pushed to maintain its free space targets, augment the decision to relog
> an intent item so that we only do it if the log has hit the 75% full
> threshold.  There's no point in relogging an intent into the same
> checkpoint, and there's no need to relog if there's plenty of free space
> in the log.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fs/xfs/libxfs/xfs_defer.c |   16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> 
> diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> index 554777d1069c..2ba02f2e59a1 100644
> --- a/fs/xfs/libxfs/xfs_defer.c
> +++ b/fs/xfs/libxfs/xfs_defer.c
> @@ -356,7 +356,10 @@ xfs_defer_relog(
>  	struct xfs_trans		**tpp,
>  	struct list_head		*dfops)
>  {
> +	struct xlog			*log = (*tpp)->t_mountp->m_log;
>  	struct xfs_defer_pending	*dfp;
> +	xfs_lsn_t			threshold_lsn = NULLCOMMITLSN;
> +
>  
>  	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
>  
> @@ -372,6 +375,19 @@ xfs_defer_relog(
>  		    xfs_log_item_in_current_chkpt(dfp->dfp_intent))
>  			continue;
>  
> +		/*
> +		 * Figure out where we need the tail to be in order to maintain
> +		 * the minimum required free space in the log.  Only sample
> +		 * the log threshold once per call.
> +		 */
> +		if (threshold_lsn == NULLCOMMITLSN) {
> +			threshold_lsn = xlog_grant_push_threshold(log, 0);
> +			if (threshold_lsn == NULLCOMMITLSN)
> +				break;
> +		}

FWIW, this looks slightly different from the referenced repo again. :P
It might be good practice to create a -v# branch for patches sent to the
list and to keep that one stable for the associated review cycle.

That aside, I'm not quite clear where we stand with this patch. My
preference was to keep it unless there was some fundamental correctness
issue that I'm not aware of. I think your and Dave's preference was to
drop it. So either way, for posterity:

Reviewed-by: Brian Foster <bfoster@redhat.com>

> +		if (XFS_LSN_CMP(dfp->dfp_intent->li_lsn, threshold_lsn) >= 0)
> +			continue;
> +
>  		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
>  		XFS_STATS_INC((*tpp)->t_mountp, defer_relog);
>  		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
> 



* Re: [PATCH 2/4] xfs: periodically relog deferred intent items
  2020-10-01 16:02   ` Brian Foster
@ 2020-10-01 17:22     ` Darrick J. Wong
  2020-10-01 17:50       ` Brian Foster
  0 siblings, 1 reply; 12+ messages in thread
From: Darrick J. Wong @ 2020-10-01 17:22 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs, david

On Thu, Oct 01, 2020 at 12:02:02PM -0400, Brian Foster wrote:
> On Tue, Sep 29, 2020 at 10:44:22AM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > There's a subtle design flaw in the deferred log item code that can lead
> > to pinning the log tail.  Taking up the defer ops chain examples from
> > the previous commit, we can get trapped in sequences like this:
> > 
> > Caller hands us a transaction t0 with D0-D3 attached.  The defer ops
> > chain will look like the following if the transaction rolls succeed:
> > 
> > t1: D0(t0), D1(t0), D2(t0), D3(t0)
> > t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
> > t3: d5(t1), D1(t0), D2(t0), D3(t0)
> > ...
> > t9: d9(t7), D3(t0)
> > t10: D3(t0)
> > t11: d10(t10), d11(t10)
> > t12: d11(t10)
> > 
> > In transaction 9, we finish d9 and try to roll to t10 while holding onto
> > an intent item for D3 that we logged in t0.
> > 
> > The previous commit changed the order in which we place new defer ops in
> > the defer ops processing chain to reduce the maximum chain length.  Now
> > make xfs_defer_finish_noroll capable of relogging the entire chain
> > periodically so that we can always move the log tail forward.  Most
> > chains will never get relogged, except for operations that generate very
> > long chains (large extents containing many blocks with different sharing
> > levels) or are on filesystems with small logs and a lot of ongoing
> > metadata updates.
> > 
> > Callers are now required to ensure that the transaction reservation is
> > large enough to handle logging done items and new intent items for the
> > maximum possible chain length.  Most callers are careful to keep the
> > chain lengths low, so the overhead should be minimal.
> > 
> > The decision to relog an intent item is made based on whether the intent
> > was logged in a previous checkpoint, since there's no point in relogging
> > an intent into the same checkpoint.
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  fs/xfs/libxfs/xfs_defer.c  |   42 ++++++++++++++++++++++++++++++++++++++++++
> >  fs/xfs/xfs_bmap_item.c     |   27 +++++++++++++++++++++++++++
> >  fs/xfs/xfs_extfree_item.c  |   29 +++++++++++++++++++++++++++++
> >  fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++++++++++++
> >  fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++++++++++++
> >  fs/xfs/xfs_stats.c         |    4 ++++
> >  fs/xfs/xfs_stats.h         |    1 +
> >  fs/xfs/xfs_trace.h         |    1 +
> >  fs/xfs/xfs_trans.h         |   10 ++++++++++
> >  9 files changed, 168 insertions(+)
> > 
> > 
> > diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> > index 81084c34138f..554777d1069c 100644
> > --- a/fs/xfs/libxfs/xfs_defer.c
> > +++ b/fs/xfs/libxfs/xfs_defer.c
> > @@ -17,6 +17,7 @@
> >  #include "xfs_inode_item.h"
> >  #include "xfs_trace.h"
> >  #include "xfs_icache.h"
> > +#include "xfs_log.h"
> >  
> >  /*
> >   * Deferred Operations in XFS
> > @@ -345,6 +346,42 @@ xfs_defer_cancel_list(
> >  	}
> >  }
> >  
> > +/*
> > + * Prevent a log intent item from pinning the tail of the log by logging a
> > + * done item to release the intent item; and then log a new intent item.
> > + * The caller should provide a fresh transaction and roll it after we're done.
> > + */
> > +static int
> > +xfs_defer_relog(
> > +	struct xfs_trans		**tpp,
> > +	struct list_head		*dfops)
> > +{
> > +	struct xfs_defer_pending	*dfp;
> > +
> > +	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
> > +
> > +	list_for_each_entry(dfp, dfops, dfp_list) {
> > +		/*
> > +		 * If the log intent item for this deferred op is not a part of
> > +		 * the current log checkpoint, relog the intent item to keep
> > +		 * the log tail moving forward.  We're ok with this being racy
> > +		 * because an incorrect decision means we'll be a little slower
> > +		 * at pushing the tail.
> > +		 */
> > +		if (dfp->dfp_intent == NULL ||
> > +		    xfs_log_item_in_current_chkpt(dfp->dfp_intent))
> > +			continue;
> > +
> 
> As discussed in the previous thread and on IRC, I'm a little concerned
> that this could result in undesirable behavior in certain cases. For
> example, if the chain grows too large for a transaction (since we don't
> explicitly account intents in transaction reservation afaict), if we
> have many of them hammering the log all at once, etc.

I think if the chain gets too large we'll know immediately since we'll
trip the assertions on the transaction reservation.  The reservations
themselves are very <cough> generous, and the defer ops mechanism
ensures that we always make forward progress by finishing at least one
item between calls to xfs_defer_relog.

AFAICT, if you have multiple threads hammering the log with long intent
chains and other threads forcing log checkpoints, the worst that
happens is that the logger threads will slow to a crawl now that they
have to relog a ton of intents into the new checkpoint (in addition to
the overhead of the log relogging buffers into the new checkpoint).  The
push threshold logic mitigates a lot of that.

> That said, I don't
> recall whether the previous versions included the transaction roll below
> after relogging one or more intents or I just missed it, but that
> mitigates one of my concerns because I was thinking we'd log the chain
> in the same transaction used for the next dfop on the list.

The roll at the end of the function has been there since I started
working on this series, though iirc the very first version did the roll
unconditionally.

> In any event, I'm fine with it if this is preferred and has been
> sufficiently stress tested:

I've been running with some variant of this patch (mostly the log push
thresholding one) for several months without issues.  The checkpoint
comparison part is new though.

> Reviewed-by: Brian Foster <bfoster@redhat.com>

Thanks for the review. :)

--D

> 
> > +		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
> > +		XFS_STATS_INC((*tpp)->t_mountp, defer_relog);
> > +		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
> > +	}
> > +
> > +	if ((*tpp)->t_flags & XFS_TRANS_DIRTY)
> > +		return xfs_defer_trans_roll(tpp);
> > +	return 0;
> > +}
> > +
> >  /*
> >   * Log an intent-done item for the first pending intent, and finish the work
> >   * items.
> > @@ -431,6 +468,11 @@ xfs_defer_finish_noroll(
> >  		if (error)
> >  			goto out_shutdown;
> >  
> > +		/* Possibly relog intent items to keep the log moving. */
> > +		error = xfs_defer_relog(tp, &dop_pending);
> > +		if (error)
> > +			goto out_shutdown;
> > +
> >  		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
> >  				       dfp_list);
> >  		error = xfs_defer_finish_one(*tp, dfp);
> > diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
> > index 0ffbc75bafe1..1cca93d3d743 100644
> > --- a/fs/xfs/xfs_bmap_item.c
> > +++ b/fs/xfs/xfs_bmap_item.c
> > @@ -532,6 +532,32 @@ xfs_bui_item_match(
> >  	return BUI_ITEM(lip)->bui_format.bui_id == intent_id;
> >  }
> >  
> > +/* Relog an intent item to push the log tail forward. */
> > +static struct xfs_log_item *
> > +xfs_bui_item_relog(
> > +	struct xfs_log_item		*intent,
> > +	struct xfs_trans		*tp)
> > +{
> > +	struct xfs_bud_log_item		*budp;
> > +	struct xfs_bui_log_item		*buip;
> > +	struct xfs_map_extent		*extp;
> > +	unsigned int			count;
> > +
> > +	count = BUI_ITEM(intent)->bui_format.bui_nextents;
> > +	extp = BUI_ITEM(intent)->bui_format.bui_extents;
> > +
> > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > +	budp = xfs_trans_get_bud(tp, BUI_ITEM(intent));
> > +	set_bit(XFS_LI_DIRTY, &budp->bud_item.li_flags);
> > +
> > +	buip = xfs_bui_init(tp->t_mountp);
> > +	memcpy(buip->bui_format.bui_extents, extp, count * sizeof(*extp));
> > +	atomic_set(&buip->bui_next_extent, count);
> > +	xfs_trans_add_item(tp, &buip->bui_item);
> > +	set_bit(XFS_LI_DIRTY, &buip->bui_item.li_flags);
> > +	return &buip->bui_item;
> > +}
> > +
> >  static const struct xfs_item_ops xfs_bui_item_ops = {
> >  	.iop_size	= xfs_bui_item_size,
> >  	.iop_format	= xfs_bui_item_format,
> > @@ -539,6 +565,7 @@ static const struct xfs_item_ops xfs_bui_item_ops = {
> >  	.iop_release	= xfs_bui_item_release,
> >  	.iop_recover	= xfs_bui_item_recover,
> >  	.iop_match	= xfs_bui_item_match,
> > +	.iop_relog	= xfs_bui_item_relog,
> >  };
> >  
> >  /*
> > diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c
> > index 3920542f5736..6c11bfc3d452 100644
> > --- a/fs/xfs/xfs_extfree_item.c
> > +++ b/fs/xfs/xfs_extfree_item.c
> > @@ -642,6 +642,34 @@ xfs_efi_item_match(
> >  	return EFI_ITEM(lip)->efi_format.efi_id == intent_id;
> >  }
> >  
> > +/* Relog an intent item to push the log tail forward. */
> > +static struct xfs_log_item *
> > +xfs_efi_item_relog(
> > +	struct xfs_log_item		*intent,
> > +	struct xfs_trans		*tp)
> > +{
> > +	struct xfs_efd_log_item		*efdp;
> > +	struct xfs_efi_log_item		*efip;
> > +	struct xfs_extent		*extp;
> > +	unsigned int			count;
> > +
> > +	count = EFI_ITEM(intent)->efi_format.efi_nextents;
> > +	extp = EFI_ITEM(intent)->efi_format.efi_extents;
> > +
> > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > +	efdp = xfs_trans_get_efd(tp, EFI_ITEM(intent), count);
> > +	efdp->efd_next_extent = count;
> > +	memcpy(efdp->efd_format.efd_extents, extp, count * sizeof(*extp));
> > +	set_bit(XFS_LI_DIRTY, &efdp->efd_item.li_flags);
> > +
> > +	efip = xfs_efi_init(tp->t_mountp, count);
> > +	memcpy(efip->efi_format.efi_extents, extp, count * sizeof(*extp));
> > +	atomic_set(&efip->efi_next_extent, count);
> > +	xfs_trans_add_item(tp, &efip->efi_item);
> > +	set_bit(XFS_LI_DIRTY, &efip->efi_item.li_flags);
> > +	return &efip->efi_item;
> > +}
> > +
> >  static const struct xfs_item_ops xfs_efi_item_ops = {
> >  	.iop_size	= xfs_efi_item_size,
> >  	.iop_format	= xfs_efi_item_format,
> > @@ -649,6 +677,7 @@ static const struct xfs_item_ops xfs_efi_item_ops = {
> >  	.iop_release	= xfs_efi_item_release,
> >  	.iop_recover	= xfs_efi_item_recover,
> >  	.iop_match	= xfs_efi_item_match,
> > +	.iop_relog	= xfs_efi_item_relog,
> >  };
> >  
> >  /*
> > diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
> > index ad895b48f365..7529eb63ce94 100644
> > --- a/fs/xfs/xfs_refcount_item.c
> > +++ b/fs/xfs/xfs_refcount_item.c
> > @@ -560,6 +560,32 @@ xfs_cui_item_match(
> >  	return CUI_ITEM(lip)->cui_format.cui_id == intent_id;
> >  }
> >  
> > +/* Relog an intent item to push the log tail forward. */
> > +static struct xfs_log_item *
> > +xfs_cui_item_relog(
> > +	struct xfs_log_item		*intent,
> > +	struct xfs_trans		*tp)
> > +{
> > +	struct xfs_cud_log_item		*cudp;
> > +	struct xfs_cui_log_item		*cuip;
> > +	struct xfs_phys_extent		*extp;
> > +	unsigned int			count;
> > +
> > +	count = CUI_ITEM(intent)->cui_format.cui_nextents;
> > +	extp = CUI_ITEM(intent)->cui_format.cui_extents;
> > +
> > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > +	cudp = xfs_trans_get_cud(tp, CUI_ITEM(intent));
> > +	set_bit(XFS_LI_DIRTY, &cudp->cud_item.li_flags);
> > +
> > +	cuip = xfs_cui_init(tp->t_mountp, count);
> > +	memcpy(cuip->cui_format.cui_extents, extp, count * sizeof(*extp));
> > +	atomic_set(&cuip->cui_next_extent, count);
> > +	xfs_trans_add_item(tp, &cuip->cui_item);
> > +	set_bit(XFS_LI_DIRTY, &cuip->cui_item.li_flags);
> > +	return &cuip->cui_item;
> > +}
> > +
> >  static const struct xfs_item_ops xfs_cui_item_ops = {
> >  	.iop_size	= xfs_cui_item_size,
> >  	.iop_format	= xfs_cui_item_format,
> > @@ -567,6 +593,7 @@ static const struct xfs_item_ops xfs_cui_item_ops = {
> >  	.iop_release	= xfs_cui_item_release,
> >  	.iop_recover	= xfs_cui_item_recover,
> >  	.iop_match	= xfs_cui_item_match,
> > +	.iop_relog	= xfs_cui_item_relog,
> >  };
> >  
> >  /*
> > diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
> > index 1163f32c3e62..7adc996ca6e3 100644
> > --- a/fs/xfs/xfs_rmap_item.c
> > +++ b/fs/xfs/xfs_rmap_item.c
> > @@ -583,6 +583,32 @@ xfs_rui_item_match(
> >  	return RUI_ITEM(lip)->rui_format.rui_id == intent_id;
> >  }
> >  
> > +/* Relog an intent item to push the log tail forward. */
> > +static struct xfs_log_item *
> > +xfs_rui_item_relog(
> > +	struct xfs_log_item		*intent,
> > +	struct xfs_trans		*tp)
> > +{
> > +	struct xfs_rud_log_item		*rudp;
> > +	struct xfs_rui_log_item		*ruip;
> > +	struct xfs_map_extent		*extp;
> > +	unsigned int			count;
> > +
> > +	count = RUI_ITEM(intent)->rui_format.rui_nextents;
> > +	extp = RUI_ITEM(intent)->rui_format.rui_extents;
> > +
> > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > +	rudp = xfs_trans_get_rud(tp, RUI_ITEM(intent));
> > +	set_bit(XFS_LI_DIRTY, &rudp->rud_item.li_flags);
> > +
> > +	ruip = xfs_rui_init(tp->t_mountp, count);
> > +	memcpy(ruip->rui_format.rui_extents, extp, count * sizeof(*extp));
> > +	atomic_set(&ruip->rui_next_extent, count);
> > +	xfs_trans_add_item(tp, &ruip->rui_item);
> > +	set_bit(XFS_LI_DIRTY, &ruip->rui_item.li_flags);
> > +	return &ruip->rui_item;
> > +}
> > +
> >  static const struct xfs_item_ops xfs_rui_item_ops = {
> >  	.iop_size	= xfs_rui_item_size,
> >  	.iop_format	= xfs_rui_item_format,
> > @@ -590,6 +616,7 @@ static const struct xfs_item_ops xfs_rui_item_ops = {
> >  	.iop_release	= xfs_rui_item_release,
> >  	.iop_recover	= xfs_rui_item_recover,
> >  	.iop_match	= xfs_rui_item_match,
> > +	.iop_relog	= xfs_rui_item_relog,
> >  };
> >  
> >  /*
> > diff --git a/fs/xfs/xfs_stats.c b/fs/xfs/xfs_stats.c
> > index f70f1255220b..20e0534a772c 100644
> > --- a/fs/xfs/xfs_stats.c
> > +++ b/fs/xfs/xfs_stats.c
> > @@ -23,6 +23,7 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
> >  	uint64_t	xs_xstrat_bytes = 0;
> >  	uint64_t	xs_write_bytes = 0;
> >  	uint64_t	xs_read_bytes = 0;
> > +	uint64_t	defer_relog = 0;
> >  
> >  	static const struct xstats_entry {
> >  		char	*desc;
> > @@ -70,10 +71,13 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
> >  		xs_xstrat_bytes += per_cpu_ptr(stats, i)->s.xs_xstrat_bytes;
> >  		xs_write_bytes += per_cpu_ptr(stats, i)->s.xs_write_bytes;
> >  		xs_read_bytes += per_cpu_ptr(stats, i)->s.xs_read_bytes;
> > +		defer_relog += per_cpu_ptr(stats, i)->s.defer_relog;
> >  	}
> >  
> >  	len += scnprintf(buf + len, PATH_MAX-len, "xpc %Lu %Lu %Lu\n",
> >  			xs_xstrat_bytes, xs_write_bytes, xs_read_bytes);
> > +	len += scnprintf(buf + len, PATH_MAX-len, "defer_relog %llu\n",
> > +			defer_relog);
> >  	len += scnprintf(buf + len, PATH_MAX-len, "debug %u\n",
> >  #if defined(DEBUG)
> >  		1);
> > diff --git a/fs/xfs/xfs_stats.h b/fs/xfs/xfs_stats.h
> > index 34d704f703d2..43ffba74f045 100644
> > --- a/fs/xfs/xfs_stats.h
> > +++ b/fs/xfs/xfs_stats.h
> > @@ -137,6 +137,7 @@ struct __xfsstats {
> >  	uint64_t		xs_xstrat_bytes;
> >  	uint64_t		xs_write_bytes;
> >  	uint64_t		xs_read_bytes;
> > +	uint64_t		defer_relog;
> >  };
> >  
> >  #define	xfsstats_offset(f)	(offsetof(struct __xfsstats, f)/sizeof(uint32_t))
> > diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
> > index dcdcf99cfa5d..86951652d3ed 100644
> > --- a/fs/xfs/xfs_trace.h
> > +++ b/fs/xfs/xfs_trace.h
> > @@ -2533,6 +2533,7 @@ DEFINE_DEFER_PENDING_EVENT(xfs_defer_create_intent);
> >  DEFINE_DEFER_PENDING_EVENT(xfs_defer_cancel_list);
> >  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_finish);
> >  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_abort);
> > +DEFINE_DEFER_PENDING_EVENT(xfs_defer_relog_intent);
> >  
> >  #define DEFINE_BMAP_FREE_DEFERRED_EVENT DEFINE_PHYS_EXTENT_DEFERRED_EVENT
> >  DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_bmap_free_defer);
> > diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
> > index 186e77d08cc1..084658946cc8 100644
> > --- a/fs/xfs/xfs_trans.h
> > +++ b/fs/xfs/xfs_trans.h
> > @@ -75,6 +75,8 @@ struct xfs_item_ops {
> >  	int (*iop_recover)(struct xfs_log_item *lip,
> >  			   struct list_head *capture_list);
> >  	bool (*iop_match)(struct xfs_log_item *item, uint64_t id);
> > +	struct xfs_log_item *(*iop_relog)(struct xfs_log_item *intent,
> > +			struct xfs_trans *tp);
> >  };
> >  
> >  /* Is this log item a deferred action intent? */
> > @@ -258,4 +260,12 @@ void		xfs_trans_buf_copy_type(struct xfs_buf *dst_bp,
> >  
> >  extern kmem_zone_t	*xfs_trans_zone;
> >  
> > +static inline struct xfs_log_item *
> > +xfs_trans_item_relog(
> > +	struct xfs_log_item	*lip,
> > +	struct xfs_trans	*tp)
> > +{
> > +	return lip->li_ops->iop_relog(lip, tp);
> > +}
> > +
> >  #endif	/* __XFS_TRANS_H__ */
> > 
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/4] xfs: only relog deferred intent items if free space in the log gets low
  2020-10-01 16:02   ` Brian Foster
@ 2020-10-01 17:33     ` Darrick J. Wong
  0 siblings, 0 replies; 12+ messages in thread
From: Darrick J. Wong @ 2020-10-01 17:33 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs, david

On Thu, Oct 01, 2020 at 12:02:56PM -0400, Brian Foster wrote:
> On Tue, Sep 29, 2020 at 10:44:35AM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > Now that we have the ability to ask the log how far the tail needs to be
> > pushed to maintain its free space targets, augment the decision to relog
> > an intent item so that we only do it if the log has hit the 75% full
> > threshold.  There's no point in relogging an intent into the same
> > checkpoint, and there's no need to relog if there's plenty of free space
> > in the log.
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  fs/xfs/libxfs/xfs_defer.c |   16 ++++++++++++++++
> >  1 file changed, 16 insertions(+)
> > 
> > 
> > diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> > index 554777d1069c..2ba02f2e59a1 100644
> > --- a/fs/xfs/libxfs/xfs_defer.c
> > +++ b/fs/xfs/libxfs/xfs_defer.c
> > @@ -356,7 +356,10 @@ xfs_defer_relog(
> >  	struct xfs_trans		**tpp,
> >  	struct list_head		*dfops)
> >  {
> > +	struct xlog			*log = (*tpp)->t_mountp->m_log;
> >  	struct xfs_defer_pending	*dfp;
> > +	xfs_lsn_t			threshold_lsn = NULLCOMMITLSN;
> > +
> >  
> >  	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
> >  
> > @@ -372,6 +375,19 @@ xfs_defer_relog(
> >  		    xfs_log_item_in_current_chkpt(dfp->dfp_intent))
> >  			continue;
> >  
> > +		/*
> > +		 * Figure out where we need the tail to be in order to maintain
> > +		 * the minimum required free space in the log.  Only sample
> > +		 * the log threshold once per call.
> > +		 */
> > +		if (threshold_lsn == NULLCOMMITLSN) {
> > +			threshold_lsn = xlog_grant_push_threshold(log, 0);
> > +			if (threshold_lsn == NULLCOMMITLSN)
> > +				break;
> > +		}
> 
> FWIW, this looks slightly different from the referenced repo again. :P
> It might be good practice to create a -v# branch for patches sent to the
> list and to keep that one stable for the associated review cycle.

I did, as we talked about on irc.  For anyone following along at home:

https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/tag/?h=defer-ops-stalls-5.10_2020-09-29

> That aside, I'm not quite clear where we stand with this patch. My
> preference was to keep it unless there was some fundamental correctness
> issue that I'm not aware of. I think your and Dave's preference was to
> drop it. So either way, for posterity:

As the author who keeps spraying this patch onto the list, I'm still of
a mind to include it. :)  After all, we don't really need to relog the
intents unless we need to move the log tail forward.

That said, even my attempts to force worst case scenarios (making a few
hundred of the big-extent-partially-shared files, then truncating them
all at the same time while repeatedly fsyncing or freezing the fs) still
show that the relog counts are a tiny percentage of the total
transactions run.  I guess I'll try a higher AG count fs next...
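
For anyone who wants to eyeball that ratio themselves, here's a rough
userspace sketch that scrapes /proc/fs/xfs/stat.  It assumes the "trans"
line still carries the three transaction counters and that the
"defer_relog" line added by this series is present; sample it before and
after a run and compare the deltas.

/*
 * Rough sketch: print the defer_relog counter and the summed transaction
 * counters from /proc/fs/xfs/stat so the relog-to-transaction ratio can
 * be eyeballed.  Assumes the "trans" line still has three counters and
 * that the "defer_relog" line from this series is present.
 */
#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void)
{
	FILE		*fp = fopen("/proc/fs/xfs/stat", "r");
	char		line[256];
	uint64_t	sync = 0, async = 0, empty = 0, relog = 0;

	if (!fp) {
		perror("/proc/fs/xfs/stat");
		return 1;
	}
	while (fgets(line, sizeof(line), fp)) {
		if (!strncmp(line, "trans ", 6))
			sscanf(line, "trans %" SCNu64 " %" SCNu64 " %" SCNu64,
					&sync, &async, &empty);
		else if (!strncmp(line, "defer_relog ", 12))
			sscanf(line, "defer_relog %" SCNu64, &relog);
	}
	fclose(fp);
	printf("transactions %" PRIu64 " defer_relog %" PRIu64 "\n",
			sync + async + empty, relog);
	return 0;
}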

--D

> 
> Reviewed-by: Brian Foster <bfoster@redhat.com>
> 
> > +		if (XFS_LSN_CMP(dfp->dfp_intent->li_lsn, threshold_lsn) >= 0)
> > +			continue;
> > +
> >  		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
> >  		XFS_STATS_INC((*tpp)->t_mountp, defer_relog);
> >  		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
> > 
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/4] xfs: periodically relog deferred intent items
  2020-10-01 17:22     ` Darrick J. Wong
@ 2020-10-01 17:50       ` Brian Foster
  0 siblings, 0 replies; 12+ messages in thread
From: Brian Foster @ 2020-10-01 17:50 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, david

On Thu, Oct 01, 2020 at 10:22:22AM -0700, Darrick J. Wong wrote:
> On Thu, Oct 01, 2020 at 12:02:02PM -0400, Brian Foster wrote:
> > On Tue, Sep 29, 2020 at 10:44:22AM -0700, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <darrick.wong@oracle.com>
> > > 
> > > There's a subtle design flaw in the deferred log item code that can lead
> > > to pinning the log tail.  Taking up the defer ops chain examples from
> > > the previous commit, we can get trapped in sequences like this:
> > > 
> > > Caller hands us a transaction t0 with D0-D3 attached.  The defer ops
> > > chain will look like the following if the transaction rolls succeed:
> > > 
> > > t1: D0(t0), D1(t0), D2(t0), D3(t0)
> > > t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
> > > t3: d5(t1), D1(t0), D2(t0), D3(t0)
> > > ...
> > > t9: d9(t7), D3(t0)
> > > t10: D3(t0)
> > > t11: d10(t10), d11(t10)
> > > t12: d11(t10)
> > > 
> > > In transaction 9, we finish d9 and try to roll to t10 while holding onto
> > > an intent item for D3 that we logged in t0.
> > > 
> > > The previous commit changed the order in which we place new defer ops in
> > > the defer ops processing chain to reduce the maximum chain length.  Now
> > > make xfs_defer_finish_noroll capable of relogging the entire chain
> > > periodically so that we can always move the log tail forward.  Most
> > > chains will never get relogged, except for operations that generate very
> > > long chains (large extents containing many blocks with different sharing
> > > levels) or are on filesystems with small logs and a lot of ongoing
> > > metadata updates.
> > > 
> > > Callers are now required to ensure that the transaction reservation is
> > > large enough to handle logging done items and new intent items for the
> > > maximum possible chain length.  Most callers are careful to keep the
> > > chain lengths low, so the overhead should be minimal.
> > > 
> > > The decision to relog an intent item is made based on whether the intent
> > > was logged in a previous checkpoint, since there's no point in relogging
> > > an intent into the same checkpoint.
> > > 
> > > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > > ---
> > >  fs/xfs/libxfs/xfs_defer.c  |   42 ++++++++++++++++++++++++++++++++++++++++++
> > >  fs/xfs/xfs_bmap_item.c     |   27 +++++++++++++++++++++++++++
> > >  fs/xfs/xfs_extfree_item.c  |   29 +++++++++++++++++++++++++++++
> > >  fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++++++++++++
> > >  fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++++++++++++
> > >  fs/xfs/xfs_stats.c         |    4 ++++
> > >  fs/xfs/xfs_stats.h         |    1 +
> > >  fs/xfs/xfs_trace.h         |    1 +
> > >  fs/xfs/xfs_trans.h         |   10 ++++++++++
> > >  9 files changed, 168 insertions(+)
> > > 
> > > 
> > > diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> > > index 81084c34138f..554777d1069c 100644
> > > --- a/fs/xfs/libxfs/xfs_defer.c
> > > +++ b/fs/xfs/libxfs/xfs_defer.c
> > > @@ -17,6 +17,7 @@
> > >  #include "xfs_inode_item.h"
> > >  #include "xfs_trace.h"
> > >  #include "xfs_icache.h"
> > > +#include "xfs_log.h"
> > >  
> > >  /*
> > >   * Deferred Operations in XFS
> > > @@ -345,6 +346,42 @@ xfs_defer_cancel_list(
> > >  	}
> > >  }
> > >  
> > > +/*
> > > + * Prevent a log intent item from pinning the tail of the log by logging a
> > > + * done item to release the intent item; and then log a new intent item.
> > > + * The caller should provide a fresh transaction and roll it after we're done.
> > > + */
> > > +static int
> > > +xfs_defer_relog(
> > > +	struct xfs_trans		**tpp,
> > > +	struct list_head		*dfops)
> > > +{
> > > +	struct xfs_defer_pending	*dfp;
> > > +
> > > +	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
> > > +
> > > +	list_for_each_entry(dfp, dfops, dfp_list) {
> > > +		/*
> > > +		 * If the log intent item for this deferred op is not a part of
> > > +		 * the current log checkpoint, relog the intent item to keep
> > > +		 * the log tail moving forward.  We're ok with this being racy
> > > +		 * because an incorrect decision means we'll be a little slower
> > > +		 * at pushing the tail.
> > > +		 */
> > > +		if (dfp->dfp_intent == NULL ||
> > > +		    xfs_log_item_in_current_chkpt(dfp->dfp_intent))
> > > +			continue;
> > > +
> > 
> > As discussed in the previous thread and on IRC, I'm a little concerned
> > that this could result in undesireable behavior in certain cases. For
> > example, if the chain grows too large for a transaction (since we don't
> > explicitly account intents in transaction reservation afaict), if we
> > have many of them hammering the log all at once, etc.
> 
> I think if the chain gets too large we'll know immediately since we'll
> trip the assertions on the transaction reservation.  The reservations
> themselves are very <cough> generous, and the defer ops mechanism
> ensures that we always make forward progress by finishing at least one
> item between xfs_defer_relog.
> 

Well we'll ultimately shut down if a transaction commit overruns its
reservation, which is why it's not something best left for users to find.  I
know that at least some transaction reservations are rather generous,
but it also seems like some are based on ancient lore and simply don't
change because they work in practice (i.e. we don't overrun them and
they otherwise aren't so large that they cause noticeable performance
problems). If we implement more schemes that effectively rely on bloated
reservations, that's probably something we should at least keep in mind
when/if we ever try to trim them down or make them more "accurate." It
might even be worth documenting that for whichever transaction(s) are
susceptible to large chains.

> AFAICT, if you have multiple threads hammering the log with long intent
> chains and other threads forcing log checkpoints, the worst that
> happens is that the logger threads will slow to a crawl now that they
> have to relog a ton of intents into the new checkpoint (in addition to
> the overhead of the log relogging buffers into the new checkpoint).  The
> push threshold logic mitigates a lot of that.
> 

Yep.

> > That said, I don't
> > recall whether the previous versions included the transaction roll below
> > after relogging one or more intents or I just missed it, but that
> > mitigates one of my concerns because I was thinking we'd log the chain
> > in the same transaction used for the next dfop on the list.
> 
> The roll at the end of the function has been there since I started
> working on this series, though iirc the very first version did the roll
> unconditionally.
> 

Ok, I must have just glossed over it.

Brian

> > In any event, I'm fine with it if this is preferred and has been
> > sufficiently stress tested:
> 
> I've been running with some variant of this patch (mostly the log push
> thresholding one) for several months without issues.  The checkpoint
> comparison part is new though.
> 
> > Reviewed-by: Brian Foster <bfoster@redhat.com>
> 
> Thanks for the review. :)
> 
> --D
> 
> > 
> > > +		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
> > > +		XFS_STATS_INC((*tpp)->t_mountp, defer_relog);
> > > +		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
> > > +	}
> > > +
> > > +	if ((*tpp)->t_flags & XFS_TRANS_DIRTY)
> > > +		return xfs_defer_trans_roll(tpp);
> > > +	return 0;
> > > +}
> > > +
> > >  /*
> > >   * Log an intent-done item for the first pending intent, and finish the work
> > >   * items.
> > > @@ -431,6 +468,11 @@ xfs_defer_finish_noroll(
> > >  		if (error)
> > >  			goto out_shutdown;
> > >  
> > > +		/* Possibly relog intent items to keep the log moving. */
> > > +		error = xfs_defer_relog(tp, &dop_pending);
> > > +		if (error)
> > > +			goto out_shutdown;
> > > +
> > >  		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
> > >  				       dfp_list);
> > >  		error = xfs_defer_finish_one(*tp, dfp);
> > > diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
> > > index 0ffbc75bafe1..1cca93d3d743 100644
> > > --- a/fs/xfs/xfs_bmap_item.c
> > > +++ b/fs/xfs/xfs_bmap_item.c
> > > @@ -532,6 +532,32 @@ xfs_bui_item_match(
> > >  	return BUI_ITEM(lip)->bui_format.bui_id == intent_id;
> > >  }
> > >  
> > > +/* Relog an intent item to push the log tail forward. */
> > > +static struct xfs_log_item *
> > > +xfs_bui_item_relog(
> > > +	struct xfs_log_item		*intent,
> > > +	struct xfs_trans		*tp)
> > > +{
> > > +	struct xfs_bud_log_item		*budp;
> > > +	struct xfs_bui_log_item		*buip;
> > > +	struct xfs_map_extent		*extp;
> > > +	unsigned int			count;
> > > +
> > > +	count = BUI_ITEM(intent)->bui_format.bui_nextents;
> > > +	extp = BUI_ITEM(intent)->bui_format.bui_extents;
> > > +
> > > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > > +	budp = xfs_trans_get_bud(tp, BUI_ITEM(intent));
> > > +	set_bit(XFS_LI_DIRTY, &budp->bud_item.li_flags);
> > > +
> > > +	buip = xfs_bui_init(tp->t_mountp);
> > > +	memcpy(buip->bui_format.bui_extents, extp, count * sizeof(*extp));
> > > +	atomic_set(&buip->bui_next_extent, count);
> > > +	xfs_trans_add_item(tp, &buip->bui_item);
> > > +	set_bit(XFS_LI_DIRTY, &buip->bui_item.li_flags);
> > > +	return &buip->bui_item;
> > > +}
> > > +
> > >  static const struct xfs_item_ops xfs_bui_item_ops = {
> > >  	.iop_size	= xfs_bui_item_size,
> > >  	.iop_format	= xfs_bui_item_format,
> > > @@ -539,6 +565,7 @@ static const struct xfs_item_ops xfs_bui_item_ops = {
> > >  	.iop_release	= xfs_bui_item_release,
> > >  	.iop_recover	= xfs_bui_item_recover,
> > >  	.iop_match	= xfs_bui_item_match,
> > > +	.iop_relog	= xfs_bui_item_relog,
> > >  };
> > >  
> > >  /*
> > > diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c
> > > index 3920542f5736..6c11bfc3d452 100644
> > > --- a/fs/xfs/xfs_extfree_item.c
> > > +++ b/fs/xfs/xfs_extfree_item.c
> > > @@ -642,6 +642,34 @@ xfs_efi_item_match(
> > >  	return EFI_ITEM(lip)->efi_format.efi_id == intent_id;
> > >  }
> > >  
> > > +/* Relog an intent item to push the log tail forward. */
> > > +static struct xfs_log_item *
> > > +xfs_efi_item_relog(
> > > +	struct xfs_log_item		*intent,
> > > +	struct xfs_trans		*tp)
> > > +{
> > > +	struct xfs_efd_log_item		*efdp;
> > > +	struct xfs_efi_log_item		*efip;
> > > +	struct xfs_extent		*extp;
> > > +	unsigned int			count;
> > > +
> > > +	count = EFI_ITEM(intent)->efi_format.efi_nextents;
> > > +	extp = EFI_ITEM(intent)->efi_format.efi_extents;
> > > +
> > > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > > +	efdp = xfs_trans_get_efd(tp, EFI_ITEM(intent), count);
> > > +	efdp->efd_next_extent = count;
> > > +	memcpy(efdp->efd_format.efd_extents, extp, count * sizeof(*extp));
> > > +	set_bit(XFS_LI_DIRTY, &efdp->efd_item.li_flags);
> > > +
> > > +	efip = xfs_efi_init(tp->t_mountp, count);
> > > +	memcpy(efip->efi_format.efi_extents, extp, count * sizeof(*extp));
> > > +	atomic_set(&efip->efi_next_extent, count);
> > > +	xfs_trans_add_item(tp, &efip->efi_item);
> > > +	set_bit(XFS_LI_DIRTY, &efip->efi_item.li_flags);
> > > +	return &efip->efi_item;
> > > +}
> > > +
> > >  static const struct xfs_item_ops xfs_efi_item_ops = {
> > >  	.iop_size	= xfs_efi_item_size,
> > >  	.iop_format	= xfs_efi_item_format,
> > > @@ -649,6 +677,7 @@ static const struct xfs_item_ops xfs_efi_item_ops = {
> > >  	.iop_release	= xfs_efi_item_release,
> > >  	.iop_recover	= xfs_efi_item_recover,
> > >  	.iop_match	= xfs_efi_item_match,
> > > +	.iop_relog	= xfs_efi_item_relog,
> > >  };
> > >  
> > >  /*
> > > diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
> > > index ad895b48f365..7529eb63ce94 100644
> > > --- a/fs/xfs/xfs_refcount_item.c
> > > +++ b/fs/xfs/xfs_refcount_item.c
> > > @@ -560,6 +560,32 @@ xfs_cui_item_match(
> > >  	return CUI_ITEM(lip)->cui_format.cui_id == intent_id;
> > >  }
> > >  
> > > +/* Relog an intent item to push the log tail forward. */
> > > +static struct xfs_log_item *
> > > +xfs_cui_item_relog(
> > > +	struct xfs_log_item		*intent,
> > > +	struct xfs_trans		*tp)
> > > +{
> > > +	struct xfs_cud_log_item		*cudp;
> > > +	struct xfs_cui_log_item		*cuip;
> > > +	struct xfs_phys_extent		*extp;
> > > +	unsigned int			count;
> > > +
> > > +	count = CUI_ITEM(intent)->cui_format.cui_nextents;
> > > +	extp = CUI_ITEM(intent)->cui_format.cui_extents;
> > > +
> > > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > > +	cudp = xfs_trans_get_cud(tp, CUI_ITEM(intent));
> > > +	set_bit(XFS_LI_DIRTY, &cudp->cud_item.li_flags);
> > > +
> > > +	cuip = xfs_cui_init(tp->t_mountp, count);
> > > +	memcpy(cuip->cui_format.cui_extents, extp, count * sizeof(*extp));
> > > +	atomic_set(&cuip->cui_next_extent, count);
> > > +	xfs_trans_add_item(tp, &cuip->cui_item);
> > > +	set_bit(XFS_LI_DIRTY, &cuip->cui_item.li_flags);
> > > +	return &cuip->cui_item;
> > > +}
> > > +
> > >  static const struct xfs_item_ops xfs_cui_item_ops = {
> > >  	.iop_size	= xfs_cui_item_size,
> > >  	.iop_format	= xfs_cui_item_format,
> > > @@ -567,6 +593,7 @@ static const struct xfs_item_ops xfs_cui_item_ops = {
> > >  	.iop_release	= xfs_cui_item_release,
> > >  	.iop_recover	= xfs_cui_item_recover,
> > >  	.iop_match	= xfs_cui_item_match,
> > > +	.iop_relog	= xfs_cui_item_relog,
> > >  };
> > >  
> > >  /*
> > > diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
> > > index 1163f32c3e62..7adc996ca6e3 100644
> > > --- a/fs/xfs/xfs_rmap_item.c
> > > +++ b/fs/xfs/xfs_rmap_item.c
> > > @@ -583,6 +583,32 @@ xfs_rui_item_match(
> > >  	return RUI_ITEM(lip)->rui_format.rui_id == intent_id;
> > >  }
> > >  
> > > +/* Relog an intent item to push the log tail forward. */
> > > +static struct xfs_log_item *
> > > +xfs_rui_item_relog(
> > > +	struct xfs_log_item		*intent,
> > > +	struct xfs_trans		*tp)
> > > +{
> > > +	struct xfs_rud_log_item		*rudp;
> > > +	struct xfs_rui_log_item		*ruip;
> > > +	struct xfs_map_extent		*extp;
> > > +	unsigned int			count;
> > > +
> > > +	count = RUI_ITEM(intent)->rui_format.rui_nextents;
> > > +	extp = RUI_ITEM(intent)->rui_format.rui_extents;
> > > +
> > > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > > +	rudp = xfs_trans_get_rud(tp, RUI_ITEM(intent));
> > > +	set_bit(XFS_LI_DIRTY, &rudp->rud_item.li_flags);
> > > +
> > > +	ruip = xfs_rui_init(tp->t_mountp, count);
> > > +	memcpy(ruip->rui_format.rui_extents, extp, count * sizeof(*extp));
> > > +	atomic_set(&ruip->rui_next_extent, count);
> > > +	xfs_trans_add_item(tp, &ruip->rui_item);
> > > +	set_bit(XFS_LI_DIRTY, &ruip->rui_item.li_flags);
> > > +	return &ruip->rui_item;
> > > +}
> > > +
> > >  static const struct xfs_item_ops xfs_rui_item_ops = {
> > >  	.iop_size	= xfs_rui_item_size,
> > >  	.iop_format	= xfs_rui_item_format,
> > > @@ -590,6 +616,7 @@ static const struct xfs_item_ops xfs_rui_item_ops = {
> > >  	.iop_release	= xfs_rui_item_release,
> > >  	.iop_recover	= xfs_rui_item_recover,
> > >  	.iop_match	= xfs_rui_item_match,
> > > +	.iop_relog	= xfs_rui_item_relog,
> > >  };
> > >  
> > >  /*
> > > diff --git a/fs/xfs/xfs_stats.c b/fs/xfs/xfs_stats.c
> > > index f70f1255220b..20e0534a772c 100644
> > > --- a/fs/xfs/xfs_stats.c
> > > +++ b/fs/xfs/xfs_stats.c
> > > @@ -23,6 +23,7 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
> > >  	uint64_t	xs_xstrat_bytes = 0;
> > >  	uint64_t	xs_write_bytes = 0;
> > >  	uint64_t	xs_read_bytes = 0;
> > > +	uint64_t	defer_relog = 0;
> > >  
> > >  	static const struct xstats_entry {
> > >  		char	*desc;
> > > @@ -70,10 +71,13 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
> > >  		xs_xstrat_bytes += per_cpu_ptr(stats, i)->s.xs_xstrat_bytes;
> > >  		xs_write_bytes += per_cpu_ptr(stats, i)->s.xs_write_bytes;
> > >  		xs_read_bytes += per_cpu_ptr(stats, i)->s.xs_read_bytes;
> > > +		defer_relog += per_cpu_ptr(stats, i)->s.defer_relog;
> > >  	}
> > >  
> > >  	len += scnprintf(buf + len, PATH_MAX-len, "xpc %Lu %Lu %Lu\n",
> > >  			xs_xstrat_bytes, xs_write_bytes, xs_read_bytes);
> > > +	len += scnprintf(buf + len, PATH_MAX-len, "defer_relog %llu\n",
> > > +			defer_relog);
> > >  	len += scnprintf(buf + len, PATH_MAX-len, "debug %u\n",
> > >  #if defined(DEBUG)
> > >  		1);
> > > diff --git a/fs/xfs/xfs_stats.h b/fs/xfs/xfs_stats.h
> > > index 34d704f703d2..43ffba74f045 100644
> > > --- a/fs/xfs/xfs_stats.h
> > > +++ b/fs/xfs/xfs_stats.h
> > > @@ -137,6 +137,7 @@ struct __xfsstats {
> > >  	uint64_t		xs_xstrat_bytes;
> > >  	uint64_t		xs_write_bytes;
> > >  	uint64_t		xs_read_bytes;
> > > +	uint64_t		defer_relog;
> > >  };
> > >  
> > >  #define	xfsstats_offset(f)	(offsetof(struct __xfsstats, f)/sizeof(uint32_t))
> > > diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
> > > index dcdcf99cfa5d..86951652d3ed 100644
> > > --- a/fs/xfs/xfs_trace.h
> > > +++ b/fs/xfs/xfs_trace.h
> > > @@ -2533,6 +2533,7 @@ DEFINE_DEFER_PENDING_EVENT(xfs_defer_create_intent);
> > >  DEFINE_DEFER_PENDING_EVENT(xfs_defer_cancel_list);
> > >  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_finish);
> > >  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_abort);
> > > +DEFINE_DEFER_PENDING_EVENT(xfs_defer_relog_intent);
> > >  
> > >  #define DEFINE_BMAP_FREE_DEFERRED_EVENT DEFINE_PHYS_EXTENT_DEFERRED_EVENT
> > >  DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_bmap_free_defer);
> > > diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
> > > index 186e77d08cc1..084658946cc8 100644
> > > --- a/fs/xfs/xfs_trans.h
> > > +++ b/fs/xfs/xfs_trans.h
> > > @@ -75,6 +75,8 @@ struct xfs_item_ops {
> > >  	int (*iop_recover)(struct xfs_log_item *lip,
> > >  			   struct list_head *capture_list);
> > >  	bool (*iop_match)(struct xfs_log_item *item, uint64_t id);
> > > +	struct xfs_log_item *(*iop_relog)(struct xfs_log_item *intent,
> > > +			struct xfs_trans *tp);
> > >  };
> > >  
> > >  /* Is this log item a deferred action intent? */
> > > @@ -258,4 +260,12 @@ void		xfs_trans_buf_copy_type(struct xfs_buf *dst_bp,
> > >  
> > >  extern kmem_zone_t	*xfs_trans_zone;
> > >  
> > > +static inline struct xfs_log_item *
> > > +xfs_trans_item_relog(
> > > +	struct xfs_log_item	*lip,
> > > +	struct xfs_trans	*tp)
> > > +{
> > > +	return lip->li_ops->iop_relog(lip, tp);
> > > +}
> > > +
> > >  #endif	/* __XFS_TRANS_H__ */
> > > 
> > 
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/4] xfs: change the order in which child and parent defer ops are finished
  2020-10-05 18:20 [PATCH v5 0/4] xfs: fix some log stalling problems in defer ops Darrick J. Wong
@ 2020-10-05 18:21 ` Darrick J. Wong
  0 siblings, 0 replies; 12+ messages in thread
From: Darrick J. Wong @ 2020-10-05 18:21 UTC (permalink / raw)
  To: darrick.wong; +Cc: Dave Chinner, Brian Foster, linux-xfs, david, bfoster, hch

From: Darrick J. Wong <darrick.wong@oracle.com>

The defer ops code has been finishing items in the wrong order -- if a
top level defer op creates items A and B, and finishing item A creates
more defer ops A1 and A2, we'll put the new items on the end of the
chain and process them in the order A B A1 A2.  This is kind of weird,
since it's convenient for programmers to be able to think of A and B as
an ordered sequence where all the sub-tasks for A must finish before we
move on to B, e.g. A A1 A2 B.

Right now, our log intent items are not so complex that this matters,
but this will become important for the atomic extent swapping patchset.
In order to maintain correct reference counting of extents, we have to
unmap and remap extents in that order, and we want to complete that work
before moving on to the next range that the user wants to swap.  This
patch fixes defer ops to satisfy that requirement.

The primary symptom of the incorrect order was noticed in an early
performance analysis of the atomic extent swap code.  An astonishingly
large number of deferred work items accumulated when userspace requested
an atomic update of two very fragmented files.  The cause of this was
traced to the same ordering bug in the inner loop of
xfs_defer_finish_noroll.

If the ->finish_item method of a deferred operation queues new deferred
operations, those new deferred ops are appended to the tail of the
pending work list.  To illustrate, say that a caller creates a
transaction t0 with four deferred operations D0-D3.  The first thing
defer ops does is roll the transaction to t1, leaving us with:

t1: D0(t0), D1(t0), D2(t0), D3(t0)

Let's say that finishing each of D0-D3 will create two new deferred ops.
After finishing D0 and rolling, we'll have the following chain:

t2: D1(t0), D2(t0), D3(t0), d4(t1), d5(t1)

d4 and d5 were logged to t1.  Notice that while we're about to start
work on D1, we haven't actually completed all the work implied by D0
being finished.  So far we've been careful (or lucky) to structure the
dfops callers such that D1 doesn't depend on d4 or d5 being finished,
but this is a potential logic bomb.

There's a second problem lurking.  Let's see what happens as we finish
D1-D3:

t3: D2(t0), D3(t0), d4(t1), d5(t1), d6(t2), d7(t2)
t4: D3(t0), d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3)
t5: d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)

Let's say that d4-d11 are simple work items that don't queue any other
operations, which means that we can complete each of them in turn.
Finishing d4 and rolling takes us to t6:

t6: d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
t7: d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
...
t11: d10(t4), d11(t4)
t12: d11(t4)
<done>

When we try to roll to transaction #12, we're holding defer op d11,
which we logged way back in t4.  This means that the tail of the log is
pinned at t4.  If the log is very small or there are a lot of other
threads updating metadata, this means that we might have wrapped the log
and cannot roll to t12 because there isn't enough space left before
we'd run into t4.

Let's shift back to the original failure.  I mentioned before that I
discovered this flaw while developing the atomic file update code.  In
that scenario, we have a defer op (D0) that finds a range of file blocks
to remap, creates a handful of new defer ops to do that, and then asks
to be continued with however much work remains.

So, D0 is the original swapext deferred op.  The first thing defer ops
does is rolls to t1:

t1: D0(t0)

We try to finish D0, logging d1 and d2 in the process, but can't get all
the work done.  We log a done item and a new intent item for the work
that D0 still has to do, and roll to t2:

t2: D0'(t1), d1(t1), d2(t1)

We roll and try to finish D0', but still can't get all the work done, so
we log a done item and a new intent item for it, requeue D0 a second
time, and roll to t3:

t3: D0''(t2), d1(t1), d2(t1), d3(t2), d4(t2)

If it takes 48 more rolls to complete D0, then we'll finally dispense
with D0 in t50:

t50: D<fifty primes>(t49), d1(t1), ..., d102(t50)

We then try to roll again to get a chain like this:

t51: d1(t1), d2(t1), ..., d101(t50), d102(t50)
...
t152: d102(t50)
<done>

Notice that in rolling to transaction #51, we're holding on to a log
intent item for d1 that was logged in transaction #1.  This means that
the tail of the log is pinned at t1.  If the log is very small or there
are a lot of other threads updating metadata, this means that we might
have wrapped the log and cannot roll to t51 because there isn't enough
space left before we'd run into t1.  This is of course problem #2 again.

But notice the third problem with this scenario: we have 102 defer ops
tied to this transaction!  Each of these items is backed by pinned
kernel memory, which means that we risk OOM if the chains get too long.

Yikes.  Problem #1 is a subtle logic bomb that could hit someone in the
future; problem #2 applies (rarely) to the current upstream, and problem
#3 applies to work under development.

This is not how incremental deferred operations were supposed to work.
The dfops design of logging in the same transaction an intent-done item
and a new intent item for the work remaining was to make it so that we
only have to juggle enough deferred work items to finish that one small
piece of work.  Deferred log item recovery will find that first
unfinished work item and restart it, no matter how many other intent
items might follow it in the log.  Therefore, it's ok to put the new
intents at the start of the dfops chain.

For the first example, the chains look like this:

t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
t3: d5(t1), D1(t0), D2(t0), D3(t0)
...
t9: d9(t7), D3(t0)
t10: D3(t0)
t11: d10(t10), d11(t10)
t12: d11(t10)

For the second example, the chains look like this:

t1: D0(t0)
t2: d1(t1), d2(t1), D0'(t1)
t3: d2(t1), D0'(t1)
t4: D0'(t1)
t5: d1(t4), d2(t4), D0''(t4)
...
t148: D0<50 primes>(t147)
t149: d101(t148), d102(t148)
t150: d102(t148)
<done>

This actually sucks more for pinning the log tail (we try to roll to t10
while holding an intent item that was logged in t1) but we've solved
problem #1.  We've also reduced the maximum chain length from:

    sum(all the new items) + nr_original_items

to:

    max(new items that each original item creates) + nr_original_items
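
(Plugging the first example into those formulas -- four caller-supplied
items, each spawning two children -- the worst case drops from
8 + 4 = 12 pending work items to 2 + 4 = 6.)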

This solves problem #3 by sharply reducing the number of defer ops that
can be attached to a transaction at any given time.  The change makes
the problem of log tail pinning worse, but it is an improvement that we need
in order to solve problem #2.  Actually solving #2, however, is left to the next
patch.

Note that a subsequent analysis of some hard-to-trigger reflink and COW
livelocks on extremely fragmented filesystems (or systems running a lot
of IO threads) showed the same symptoms -- uncomfortably large numbers
of incore deferred work items and occasional stalls in the transaction
grant code while waiting for log reservations.  I think this patch and
the next one will also solve these problems.

As originally written, the code used list_splice_tail_init instead of
list_splice_init, so change that, and leave a short comment explaining
our actions.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_defer.c |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)


diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
index 4d2583758f3d..3aaf8f6f05a4 100644
--- a/fs/xfs/libxfs/xfs_defer.c
+++ b/fs/xfs/libxfs/xfs_defer.c
@@ -415,8 +415,17 @@ xfs_defer_finish_noroll(
 
 	/* Until we run out of pending work to finish... */
 	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
+		/*
+		 * Deferred items that are created in the process of finishing
+		 * other deferred work items should be queued at the head of
+		 * the pending list, which puts them ahead of the deferred work
+		 * that was created by the caller.  This keeps the number of
+		 * pending work items to a minimum, which decreases the amount
+		 * of time that any one intent item can stick around in memory,
+		 * pinning the log tail.
+		 */
 		xfs_defer_create_intents(*tp);
-		list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);
+		list_splice_init(&(*tp)->t_dfops, &dop_pending);
 
 		error = xfs_defer_trans_roll(tp);
 		if (error)


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 1/4] xfs: change the order in which child and parent defer ops are finished
  2020-09-27 23:41 [PATCH v4 0/4] xfs: fix some log stalling problems in defer ops Darrick J. Wong
@ 2020-09-27 23:42 ` Darrick J. Wong
  0 siblings, 0 replies; 12+ messages in thread
From: Darrick J. Wong @ 2020-09-27 23:42 UTC (permalink / raw)
  To: darrick.wong; +Cc: Dave Chinner, Brian Foster, linux-xfs, david, bfoster

From: Darrick J. Wong <darrick.wong@oracle.com>

The defer ops code has been finishing items in the wrong order -- if a
top level defer op creates items A and B, and finishing item A creates
more defer ops A1 and A2, we'll put the new items on the end of the
chain and process them in the order A B A1 A2.  This is kind of weird,
since it's convenient for programmers to be able to think of A and B as
an ordered sequence where all the sub-tasks for A must finish before we
move on to B, e.g. A A1 A2 B.

Right now, our log intent items are not so complex that this matters,
but this will become important for the atomic extent swapping patchset.
In order to maintain correct reference counting of extents, we have to
unmap and remap extents in that order, and we want to complete that work
before moving on to the next range that the user wants to swap.  This
patch fixes defer ops to satisfy that requirement.

The primary symptom of the incorrect order was noticed in an early
performance analysis of the atomic extent swap code.  An astonishingly
large number of deferred work items accumulated when userspace requested
an atomic update of two very fragmented files.  The cause of this was
traced to the same ordering bug in the inner loop of
xfs_defer_finish_noroll.

If the ->finish_item method of a deferred operation queues new deferred
operations, those new deferred ops are appended to the tail of the
pending work list.  To illustrate, say that a caller creates a
transaction t0 with four deferred operations D0-D3.  The first thing
defer ops does is roll the transaction to t1, leaving us with:

t1: D0(t0), D1(t0), D2(t0), D3(t0)

Let's say that finishing each of D0-D3 will create two new deferred ops.
After finishing D0 and rolling, we'll have the following chain:

t2: D1(t0), D2(t0), D3(t0), d4(t1), d5(t1)

d4 and d5 were logged to t1.  Notice that while we're about to start
work on D1, we haven't actually completed all the work implied by D0
being finished.  So far we've been careful (or lucky) to structure the
dfops callers such that D1 doesn't depend on d4 or d5 being finished,
but this is a potential logic bomb.

There's a second problem lurking.  Let's see what happens as we finish
D1-D3:

t3: D2(t0), D3(t0), d4(t1), d5(t1), d6(t2), d7(t2)
t4: D3(t0), d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3)
t5: d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)

Let's say that d4-d11 are simple work items that don't queue any other
operations, which means that we can complete each of them in turn.
Finishing d4 and rolling takes us to t6:

t6: d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
t7: d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
...
t11: d10(t4), d11(t4)
t12: d11(t4)
<done>

When we try to roll to transaction #12, we're holding defer op d11,
which we logged way back in t4.  This means that the tail of the log is
pinned at t4.  If the log is very small or there are a lot of other
threads updating metadata, this means that we might have wrapped the log
and cannot roll to t12 because there isn't enough space left before
we'd run into t4.

Let's shift back to the original failure.  I mentioned before that I
discovered this flaw while developing the atomic file update code.  In
that scenario, we have a defer op (D0) that finds a range of file blocks
to remap, creates a handful of new defer ops to do that, and then asks
to be continued with however much work remains.

So, D0 is the original swapext deferred op.  The first thing defer ops
does is rolls to t1:

t1: D0(t0)

We try to finish D0, logging d1 and d2 in the process, but can't get all
the work done.  We log a done item and a new intent item for the work
that D0 still has to do, and roll to t2:

t2: D0'(t1), d1(t1), d2(t1)

We roll and try to finish D0', but still can't get all the work done, so
we log a done item and a new intent item for it, requeue D0 a second
time, and roll to t3:

t3: D0''(t2), d1(t1), d2(t1), d3(t2), d4(t2)

If it takes 48 more rolls to complete D0, then we'll finally dispense
with D0 in t50:

t50: D<fifty primes>(t49), d1(t1), ..., d102(t50)

We then try to roll again to get a chain like this:

t51: d1(t1), d2(t1), ..., d101(t50), d102(t50)
...
t152: d102(t50)
<done>

Notice that in rolling to transaction #51, we're holding on to a log
intent item for d1 that was logged in transaction #1.  This means that
the tail of the log is pinned at t1.  If the log is very small or there
are a lot of other threads updating metadata, this means that we might
have wrapped the log and cannot roll to t51 because there isn't enough
space left before we'd run into t1.  This is of course problem #2 again.

But notice the third problem with this scenario: we have 102 defer ops
tied to this transaction!  Each of these items is backed by pinned
kernel memory, which means that we risk OOM if the chains get too long.

Yikes.  Problem #1 is a subtle logic bomb that could hit someone in the
future; problem #2 applies (rarely) to the current upstream, and problem
#3 applies to work under development.

This is not how incremental deferred operations were supposed to work.
The dfops design of logging in the same transaction an intent-done item
and a new intent item for the work remaining was to make it so that we
only have to juggle enough deferred work items to finish that one small
piece of work.  Deferred log item recovery will find that first
unfinished work item and restart it, no matter how many other intent
items might follow it in the log.  Therefore, it's ok to put the new
intents at the start of the dfops chain.

For the first example, the chains look like this:

t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
t3: d5(t1), D1(t0), D2(t0), D3(t0)
...
t9: d9(t7), D3(t0)
t10: D3(t0)
t11: d10(t10), d11(t10)
t12: d11(t10)

For the second example, the chains look like this:

t1: D0(t0)
t2: d1(t1), d2(t1), D0'(t1)
t3: d2(t1), D0'(t1)
t4: D0'(t1)
t5: d1(t4), d2(t4), D0''(t4)
...
t148: D0<50 primes>(t147)
t149: d101(t148), d102(t148)
t150: d102(t148)
<done>

This actually sucks more for pinning the log tail (we try to roll to t10
while holding an intent item that was logged in t1) but we've solved
problem #1.  We've also reduced the maximum chain length from:

    sum(all the new items) + nr_original_items

to:

    max(new items that each original item creates) + nr_original_items

This solves problem #3 by sharply reducing the number of defer ops that
can be attached to a transaction at any given time.  The change makes
the problem of log tail pinning worse, but it is an improvement that we need
in order to solve problem #2.  Actually solving #2, however, is left to the next
patch.

Note that a subsequent analysis of some hard-to-trigger reflink and COW
livelocks on extremely fragmented filesystems (or systems running a lot
of IO threads) showed the same symptoms -- uncomfortably large numbers
of incore deferred work items and occasional stalls in the transaction
grant code while waiting for log reservations.  I think this patch and
the next one will also solve these problems.

As originally written, the code used list_splice_tail_init instead of
list_splice_init, so change that, and leave a short comment explaining
our actions.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_defer.c |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)


diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
index 5a9436d83833..8d42b914dcd4 100644
--- a/fs/xfs/libxfs/xfs_defer.c
+++ b/fs/xfs/libxfs/xfs_defer.c
@@ -415,8 +415,17 @@ xfs_defer_finish_noroll(
 
 	/* Until we run out of pending work to finish... */
 	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
+		/*
+		 * Deferred items that are created in the process of finishing
+		 * other deferred work items should be queued at the head of
+		 * the pending list, which puts them ahead of the deferred work
+		 * that was created by the caller.  This keeps the number of
+		 * pending work items to a minimum, which decreases the amount
+		 * of time that any one intent item can stick around in memory,
+		 * pinning the log tail.
+		 */
 		xfs_defer_create_intents(*tp);
-		list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);
+		list_splice_init(&(*tp)->t_dfops, &dop_pending);
 
 		error = xfs_defer_trans_roll(tp);
 		if (error)


^ permalink raw reply related	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2020-10-05 18:23 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-29 17:44 [PATCH v5 0/4] xfs: fix some log stalling problems in defer ops Darrick J. Wong
2020-09-29 17:44 ` [PATCH 1/4] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
2020-09-29 17:44 ` [PATCH 2/4] xfs: periodically relog deferred intent items Darrick J. Wong
2020-10-01 16:02   ` Brian Foster
2020-10-01 17:22     ` Darrick J. Wong
2020-10-01 17:50       ` Brian Foster
2020-09-29 17:44 ` [PATCH 3/4] xfs: expose the log push threshold Darrick J. Wong
2020-09-29 17:44 ` [PATCH 4/4] xfs: only relog deferred intent items if free space in the log gets low Darrick J. Wong
2020-10-01 16:02   ` Brian Foster
2020-10-01 17:33     ` Darrick J. Wong
  -- strict thread matches above, loose matches on Subject: below --
2020-10-05 18:20 [PATCH v5 0/4] xfs: fix some log stalling problems in defer ops Darrick J. Wong
2020-10-05 18:21 ` [PATCH 1/4] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
2020-09-27 23:41 [PATCH v4 0/4] xfs: fix some log stalling problems in defer ops Darrick J. Wong
2020-09-27 23:42 ` [PATCH 1/4] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
