* [PATCH 0/3] xfs: fix some log stalling problems in defer ops
@ 2020-09-17  3:29 Darrick J. Wong
  2020-09-17  3:29 ` [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Darrick J. Wong @ 2020-09-17  3:29 UTC (permalink / raw)
  To: darrick.wong; +Cc: linux-xfs, david, bfoster

Hi all,

This series tries to fix some structural problems in the defer ops code.
The defer ops code has been finishing items in the wrong order -- if a
top level defer op creates items A and B, and finishing item A creates
more defer ops A1 and A2, we'll put the new items on the end of the chain
and process them in the order A B A1 A2.  This is kind of weird, since
it's convenient for programmers to be able to think of A and B as an
ordered sequence where all the work for A must finish before we move on
to B, e.g. A A1 A2 B.

That isn't how the defer ops code actually works, but so far we've been
lucky that this hasn't ever caused serious problems.  This /will/
matter, however, when we get to the atomic extent swap code, where for
refcounting
purposes it actually /does/ matter that unmap and map child intents
execute in that order, and complete before we move on to the next extent
in the files.  This also causes a very long chain of intent items to
build up, which can exhaust memory.

We need to teach defer ops to finish all the sub-work associated with
each defer op that the caller gave us, to minimize the length of the
defer ops chains; and then we need to teach it to relog items
periodically to avoid pinning the log tail.

If you're going to start using this mess, you probably ought to just
pull from my git trees, which are linked below.

This is an extraordinary way to destroy everything.  Enjoy!
Comments and questions are, as always, welcome.

--D

kernel git tree:
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git/log/?h=defer-ops-stalls

xfsprogs git tree:
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=defer-ops-stalls
---
 fs/xfs/libxfs/xfs_defer.c  |   59 +++++++++++++++++++++++++++++++++++++++++++-
 fs/xfs/xfs_bmap_item.c     |   25 +++++++++++++++++++
 fs/xfs/xfs_extfree_item.c  |   29 ++++++++++++++++++++++
 fs/xfs/xfs_log.c           |   41 +++++++++++++++++++++++--------
 fs/xfs/xfs_log.h           |    2 +
 fs/xfs/xfs_refcount_item.c |   27 ++++++++++++++++++++
 fs/xfs/xfs_rmap_item.c     |   27 ++++++++++++++++++++
 fs/xfs/xfs_trace.h         |    1 +
 fs/xfs/xfs_trans.h         |   10 +++++++
 9 files changed, 210 insertions(+), 11 deletions(-)



* [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished
  2020-09-17  3:29 [PATCH 0/3] xfs: fix some log stalling problems in defer ops Darrick J. Wong
@ 2020-09-17  3:29 ` Darrick J. Wong
  2020-09-17  5:57   ` Dave Chinner
  2020-09-17 15:27   ` Brian Foster
  2020-09-17  3:30 ` [PATCH 2/3] xfs: periodically relog deferred intent items Darrick J. Wong
  2020-09-17  3:30 ` [PATCH 3/3] xfs: use the log grant push threshold to decide if we're going to relog deferred items Darrick J. Wong
  2 siblings, 2 replies; 13+ messages in thread
From: Darrick J. Wong @ 2020-09-17  3:29 UTC (permalink / raw)
  To: darrick.wong; +Cc: linux-xfs, david, bfoster

From: Darrick J. Wong <darrick.wong@oracle.com>

The defer ops code has been finishing items in the wrong order -- if a
top level defer op creates items A and B, and finishing item A creates
more defer ops A1 and A2, we'll put the new items on the end of the
chain and process them in the order A B A1 A2.  This is kind of weird,
since it's convenient for programmers to be able to think of A and B as
an ordered sequence where all the sub-tasks for A must finish before we
move on to B, e.g. A A1 A2 B.

Right now, our log intent items are not so complex that this matters,
but this will become important for the atomic extent swapping patchset.
In order to maintain correct reference counting of extents, we have to
unmap and remap extents in that order, and we want to complete that work
before moving on to the next range that the user wants to swap.  This
patch fixes defer ops to satisfy that requirement.

The primary symptom of the incorrect order was noticed in an early
performance analysis of the atomic extent swap code.  An astonishingly
large number of deferred work items accumulated when userspace requested
an atomic update of two very fragmented files.  The cause of this was
traced to the same ordering bug in the inner loop of
xfs_defer_finish_noroll.

If the ->finish_item method of a deferred operation queues new deferred
operations, those new deferred ops are appended to the tail of the
pending work list.  To illustrate, say that a caller creates a
transaction t0 with four deferred operations D0-D3.  The first thing
defer ops does is roll the transaction to t1, leaving us with:

t1: D0(t0), D1(t0), D2(t0), D3(t0)

Let's say that finishing each of D0-D3 will create two new deferred ops.
After finishing D0 and rolling, we'll have the following chain:

t2: D1(t0), D2(t0), D3(t0), d4(t1), d5(t1)

d4 and d5 were logged to t1.  Notice that while we're about to start
work on D1, we haven't actually completed all the work implied by D0
being finished.  So far we've been careful (or lucky) to structure the
dfops callers such that D1 doesn't depend on d4 or d5 being finished,
but this is a potential logic bomb.

There's a second problem lurking.  Let's see what happens as we finish
D1-D3:

t3: D2(t0), D3(t0), d4(t1), d5(t1), d6(t2), d7(t2)
t4: D3(t0), d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3)
t5: d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)

Let's say that d4-d11 are simple work items that don't queue any other
operations, which means that we can complete each one in turn.  We
finish d4 and roll to t6:

t6: d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
t7: d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
...
t11: d10(t4), d11(t4)
t12: d11(t4)
<done>

When we try to roll to transaction #12, we're holding defer op d11,
which we logged way back in t4.  This means that the tail of the log is
pinned at t4.  If the log is very small or there are a lot of other
threads updating metadata, this means that we might have wrapped the log
and cannot roll to t12 because there isn't enough space left before
we'd run into t4.

Let's shift back to the original failure.  I mentioned before that I
discovered this flaw while developing the atomic file update code.  In
that scenario, we have a defer op (D0) that finds a range of file blocks
to remap, creates a handful of new defer ops to do that, and then asks
to be continued with however much work remains.

So, D0 is the original swapext deferred op.  The first thing defer ops
does is roll to t1:

t1: D0(t0)

We try to finish D0, logging d1 and d2 in the process, but can't get all
the work done.  We log a done item and a new intent item for the work
that D0 still has to do, and roll to t2:

t2: D0'(t1), d1(t1), d2(t1)

We roll and try to finish D0', but still can't get all the work done, so
we log a done item and a new intent item for it, requeue D0 a second
time, and roll to t3:

t3: D0''(t2), d1(t1), d2(t1), d3(t2), d4(t2)

If it takes 48 more rolls to complete D0, then we'll finally dispense
with D0 in t50:

t50: D<fifty primes>(t49), d1(t1), ..., d102(t50)

We then try to roll again to get a chain like this:

t51: d1(t1), d2(t1), ..., d101(t50), d102(t50)
...
t152: d102(t50)
<done>

Notice that in rolling to transaction #51, we're holding on to a log
intent item for d1 that was logged in transaction #1.  This means that
the tail of the log is pinned at t1.  If the log is very small or there
are a lot of other threads updating metadata, this means that we might
have wrapped the log and cannot roll to t51 because there isn't enough
space left before we'd run into t1.  This is of course problem #2 again.

But notice the third problem with this scenario: we have 102 defer ops
tied to this transaction!  Each of these items is backed by pinned
kernel memory, which means that we risk OOM if the chains get too long.

Yikes.  Problem #1 is a subtle logic bomb that could hit someone in the
future; problem #2 applies (rarely) to the current upstream, and problem
#3 applies to work under development.

This is not how incremental deferred operations were supposed to work.
The dfops design of logging in the same transaction an intent-done item
and a new intent item for the work remaining was to make it so that we
only have to juggle enough deferred work items to finish that one small
piece of work.  Deferred log item recovery will find that first
unfinished work item and restart it, no matter how many other intent
items might follow it in the log.  Therefore, it's ok to put the new
intents at the start of the dfops chain.

For the first example, the chains look like this:

t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
t3: d5(t1), D1(t0), D2(t0), D3(t0)
...
t9: d9(t7), D3(t0)
t10: D3(t0)
t11: d10(t10), d11(t10)
t12: d11(t10)

For the second example, the chains look like this:

t1: D0(t0)
t2: d1(t1), d2(t1), D0'(t1)
t3: d2(t1), D0'(t1)
t4: D0'(t1)
t5: d1(t4), d2(t4), D0''(t4)
...
t148: D0<50 primes>(t147)
t149: d101(t148), d102(t148)
t150: d102(t148)
<done>

This actually sucks more for pinning the log tail (we try to roll to t10
while holding an intent item that was logged in t1) but we've solved
problem #1.  We've also reduced the maximum chain length from:

    sum(all the new items) + nr_original_items

to:

    max(new items that each original item creates) + nr_original_items

This solves problem #3 by sharply reducing the number of defer ops that
can be attached to a transaction at any given time.  The change makes
the problem of log tail pinning worse, but it is a necessary step
toward solving problem #2.  Actually solving #2, however, is left to
the next patch.
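
To see the ordering change in isolation, here is a tiny userspace toy
(not kernel code -- the queue type, the helpers, and the item names are
all invented for illustration) that mimics the dfops loop and prints
the processing order both ways:

#include <stdio.h>
#include <string.h>

#define MAXQ	16

struct queue {
	const char	*items[MAXQ];
	int		len;
};

/* Splice @src onto @dst, at the tail (old code) or the head (new). */
static void splice(struct queue *dst, struct queue *src, int at_head)
{
	struct queue	out = { .len = 0 };
	int		i;

	if (at_head)
		for (i = 0; i < src->len; i++)
			out.items[out.len++] = src->items[i];
	for (i = 0; i < dst->len; i++)
		out.items[out.len++] = dst->items[i];
	if (!at_head)
		for (i = 0; i < src->len; i++)
			out.items[out.len++] = src->items[i];
	*dst = out;
	src->len = 0;
}

/* "Finishing" A queues two child items, like D0 creating d4 and d5. */
static void finish(const char *item, struct queue *new_items)
{
	printf("%s ", item);
	if (!strcmp(item, "A")) {
		new_items->items[new_items->len++] = "A1";
		new_items->items[new_items->len++] = "A2";
	}
}

static void run(int at_head)
{
	struct queue	pending = { .items = { "A", "B" }, .len = 2 };
	struct queue	new_items = { .len = 0 };

	while (pending.len) {
		const char	*item = pending.items[0];

		memmove(&pending.items[0], &pending.items[1],
				--pending.len * sizeof(item));
		finish(item, &new_items);
		splice(&pending, &new_items, at_head);
	}
	printf("\n");
}

int main(void)
{
	printf("tail-splice (old): ");	/* prints: A B A1 A2 */
	run(0);
	printf("head-splice (new): ");	/* prints: A A1 A2 B */
	run(1);
	return 0;
}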

Note that a subsequent analysis of some hard-to-trigger reflink and COW
livelocks on extremely fragmented filesystems (or systems running a lot
of IO threads) showed the same symptoms -- uncomfortably large numbers
of incore deferred work items and occasional stalls in the transaction
grant code while waiting for log reservations.  I think this patch and
the next one will also solve these problems.

As originally written, the code used list_splice_tail_init instead of
list_splice_init, so change that, and leave a short comment explaining
our actions.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/xfs/libxfs/xfs_defer.c |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)


diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
index 97523b394932..84a70edd0da1 100644
--- a/fs/xfs/libxfs/xfs_defer.c
+++ b/fs/xfs/libxfs/xfs_defer.c
@@ -431,8 +431,17 @@ xfs_defer_finish_noroll(
 
 	/* Until we run out of pending work to finish... */
 	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
+		/*
+		 * Deferred items that are created in the process of finishing
+		 * other deferred work items should be queued at the head of
+		 * the pending list, which puts them ahead of the deferred work
+		 * that was created by the caller.  This keeps the number of
+		 * pending work items to a minimum, which decreases the amount
+		 * of time that any one intent item can stick around in memory,
+		 * pinning the log tail.
+		 */
 		xfs_defer_create_intents(*tp);
-		list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);
+		list_splice_init(&(*tp)->t_dfops, &dop_pending);
 
 		error = xfs_defer_trans_roll(tp);
 		if (error)



* [PATCH 2/3] xfs: periodically relog deferred intent items
  2020-09-17  3:29 [PATCH 0/3] xfs: fix some log stalling problems in defer ops Darrick J. Wong
  2020-09-17  3:29 ` [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
@ 2020-09-17  3:30 ` Darrick J. Wong
  2020-09-17  6:11   ` Dave Chinner
  2020-09-17 15:28   ` Brian Foster
  2020-09-17  3:30 ` [PATCH 3/3] xfs: use the log grant push threshold to decide if we're going to relog deferred items Darrick J. Wong
  2 siblings, 2 replies; 13+ messages in thread
From: Darrick J. Wong @ 2020-09-17  3:30 UTC (permalink / raw)
  To: darrick.wong; +Cc: linux-xfs, david, bfoster

From: Darrick J. Wong <darrick.wong@oracle.com>

There's a subtle design flaw in the deferred log item code that can lead
to pinning the log tail.  Taking up the defer ops chain examples from
the previous commit, we can get trapped in sequences like this:

Caller hands us a transaction t0 with D0-D3 attached.  The defer ops
chain will look like the following if the transaction rolls succeed:

t1: D0(t0), D1(t0), D2(t0), D3(t0)
t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
t3: d5(t1), D1(t0), D2(t0), D3(t0)
...
t9: d9(t7), D3(t0)
t10: D3(t0)
t11: d10(t10), d11(t10)
t12: d11(t10)

In transaction 9, we finish d9 and try to roll to t10 while holding onto
an intent item for D3 that we logged in t0.

The previous commit changed the order in which we place new defer ops in
the defer ops processing chain to reduce the maximum chain length.  Now
make xfs_defer_finish_noroll capable of relogging the entire chain
periodically so that we can always move the log tail forward.  We do
this every seven loops, having observed that while most chains never
exceed seven items in length, the rest go far over that and seem to
be involved in most of the stall problems.

Callers are now required to ensure that the transaction reservation is
large enough to handle logging done items and new intent items for the
maximum possible chain length.  Most callers are careful to keep the
chain lengths low, so the overhead should be minimal.

(Note that in the next patch we'll make it so that we only relog on
demand, since 7 is an arbitrary number that I used here to get the basic
mechanics working.)
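
All four ->iop_relog implementations in the diff below follow the same
shape; schematically (this is pseudo-C for orientation only, where
get_done_item/alloc_intent/copy_work stand in for the per-type helpers
such as xfs_trans_get_bud and xfs_bui_init):

static struct xfs_log_item *
xxx_item_relog(struct xfs_log_item *intent, struct xfs_trans *tp)
{
	tp->t_flags |= XFS_TRANS_DIRTY;

	/* Log a done item; this releases the old intent at commit. */
	done = get_done_item(tp, intent);
	set_bit(XFS_LI_DIRTY, &done->li_flags);

	/* Log a fresh intent carrying the same work at a newer LSN. */
	new = alloc_intent(tp->t_mountp);
	copy_work(new, intent);
	xfs_trans_add_item(tp, new);
	set_bit(XFS_LI_DIRTY, &new->li_flags);
	return new;
}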

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/xfs/libxfs/xfs_defer.c  |   30 ++++++++++++++++++++++++++++++
 fs/xfs/xfs_bmap_item.c     |   25 +++++++++++++++++++++++++
 fs/xfs/xfs_extfree_item.c  |   29 +++++++++++++++++++++++++++++
 fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++++++++++++
 fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++++++++++++
 fs/xfs/xfs_trace.h         |    1 +
 fs/xfs/xfs_trans.h         |   10 ++++++++++
 7 files changed, 149 insertions(+)


diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
index 84a70edd0da1..7938e4d3af90 100644
--- a/fs/xfs/libxfs/xfs_defer.c
+++ b/fs/xfs/libxfs/xfs_defer.c
@@ -361,6 +361,28 @@ xfs_defer_cancel_list(
 	}
 }
 
+/*
+ * Prevent a log intent item from pinning the tail of the log by logging a
+ * done item to release the intent item; and then log a new intent item.
+ * The caller should provide a fresh transaction and roll it after we're done.
+ */
+static int
+xfs_defer_relog(
+	struct xfs_trans		**tpp,
+	struct list_head		*dfops)
+{
+	struct xfs_defer_pending	*dfp;
+
+	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
+
+	list_for_each_entry(dfp, dfops, dfp_list) {
+		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
+		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
+	}
+
+	return xfs_defer_trans_roll(tpp);
+}
+
 /*
  * Log an intent-done item for the first pending intent, and finish the work
  * items.
@@ -422,6 +444,7 @@ xfs_defer_finish_noroll(
 	struct xfs_trans		**tp)
 {
 	struct xfs_defer_pending	*dfp;
+	unsigned int			nr_rolls = 0;
 	int				error = 0;
 	LIST_HEAD(dop_pending);
 
@@ -447,6 +470,13 @@ xfs_defer_finish_noroll(
 		if (error)
 			goto out_shutdown;
 
+		/* Every few rolls we relog all the intent items. */
+		if (!(++nr_rolls % 7)) {
+			error = xfs_defer_relog(tp, &dop_pending);
+			if (error)
+				goto out_shutdown;
+		}
+
 		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
 				       dfp_list);
 		error = xfs_defer_finish_one(*tp, dfp);
diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
index 8461285a9dd9..2d4c60776714 100644
--- a/fs/xfs/xfs_bmap_item.c
+++ b/fs/xfs/xfs_bmap_item.c
@@ -535,6 +535,30 @@ xfs_bui_item_match(
 	return BUI_ITEM(lip)->bui_format.bui_id == intent_id;
 }
 
+/* Relog an intent item to push the log tail forward. */
+static struct xfs_log_item *
+xfs_bui_item_relog(
+	struct xfs_log_item		*intent,
+	struct xfs_trans		*tp)
+{
+	struct xfs_bud_log_item		*budp;
+	struct xfs_bui_log_item		*buip;
+	struct xfs_map_extent		*extp;
+
+	extp = BUI_ITEM(intent)->bui_format.bui_extents;
+
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	budp = xfs_trans_get_bud(tp, BUI_ITEM(intent));
+	set_bit(XFS_LI_DIRTY, &budp->bud_item.li_flags);
+
+	buip = xfs_bui_init(tp->t_mountp);
+	memcpy(buip->bui_format.bui_extents, extp, sizeof(*extp));
+	atomic_set(&buip->bui_next_extent, 1);
+	xfs_trans_add_item(tp, &buip->bui_item);
+	set_bit(XFS_LI_DIRTY, &buip->bui_item.li_flags);
+	return &buip->bui_item;
+}
+
 static const struct xfs_item_ops xfs_bui_item_ops = {
 	.iop_size	= xfs_bui_item_size,
 	.iop_format	= xfs_bui_item_format,
@@ -542,6 +566,7 @@ static const struct xfs_item_ops xfs_bui_item_ops = {
 	.iop_release	= xfs_bui_item_release,
 	.iop_recover	= xfs_bui_item_recover,
 	.iop_match	= xfs_bui_item_match,
+	.iop_relog	= xfs_bui_item_relog,
 };
 
 /*
diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c
index f544ec007c12..7c3eebfd9064 100644
--- a/fs/xfs/xfs_extfree_item.c
+++ b/fs/xfs/xfs_extfree_item.c
@@ -648,6 +648,34 @@ xfs_efi_item_match(
 	return EFI_ITEM(lip)->efi_format.efi_id == intent_id;
 }
 
+/* Relog an intent item to push the log tail forward. */
+static struct xfs_log_item *
+xfs_efi_item_relog(
+	struct xfs_log_item		*intent,
+	struct xfs_trans		*tp)
+{
+	struct xfs_efd_log_item		*efdp;
+	struct xfs_efi_log_item		*efip;
+	struct xfs_extent		*extp;
+	unsigned int			count;
+
+	count = EFI_ITEM(intent)->efi_format.efi_nextents;
+	extp = EFI_ITEM(intent)->efi_format.efi_extents;
+
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	efdp = xfs_trans_get_efd(tp, EFI_ITEM(intent), count);
+	efdp->efd_next_extent = count;
+	memcpy(efdp->efd_format.efd_extents, extp, count * sizeof(*extp));
+	set_bit(XFS_LI_DIRTY, &efdp->efd_item.li_flags);
+
+	efip = xfs_efi_init(tp->t_mountp, count);
+	memcpy(efip->efi_format.efi_extents, extp, count * sizeof(*extp));
+	atomic_set(&efip->efi_next_extent, count);
+	xfs_trans_add_item(tp, &efip->efi_item);
+	set_bit(XFS_LI_DIRTY, &efip->efi_item.li_flags);
+	return &efip->efi_item;
+}
+
 static const struct xfs_item_ops xfs_efi_item_ops = {
 	.iop_size	= xfs_efi_item_size,
 	.iop_format	= xfs_efi_item_format,
@@ -655,6 +683,7 @@ static const struct xfs_item_ops xfs_efi_item_ops = {
 	.iop_release	= xfs_efi_item_release,
 	.iop_recover	= xfs_efi_item_recover,
 	.iop_match	= xfs_efi_item_match,
+	.iop_relog	= xfs_efi_item_relog,
 };
 
 /*
diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
index 30e579b5857d..9af7099e2a86 100644
--- a/fs/xfs/xfs_refcount_item.c
+++ b/fs/xfs/xfs_refcount_item.c
@@ -566,6 +566,32 @@ xfs_cui_item_match(
 	return CUI_ITEM(lip)->cui_format.cui_id == intent_id;
 }
 
+/* Relog an intent item to push the log tail forward. */
+static struct xfs_log_item *
+xfs_cui_item_relog(
+	struct xfs_log_item		*intent,
+	struct xfs_trans		*tp)
+{
+	struct xfs_cud_log_item		*cudp;
+	struct xfs_cui_log_item		*cuip;
+	struct xfs_phys_extent		*extp;
+	unsigned int			count;
+
+	count = CUI_ITEM(intent)->cui_format.cui_nextents;
+	extp = CUI_ITEM(intent)->cui_format.cui_extents;
+
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	cudp = xfs_trans_get_cud(tp, CUI_ITEM(intent));
+	set_bit(XFS_LI_DIRTY, &cudp->cud_item.li_flags);
+
+	cuip = xfs_cui_init(tp->t_mountp, count);
+	memcpy(cuip->cui_format.cui_extents, extp, count * sizeof(*extp));
+	atomic_set(&cuip->cui_next_extent, count);
+	xfs_trans_add_item(tp, &cuip->cui_item);
+	set_bit(XFS_LI_DIRTY, &cuip->cui_item.li_flags);
+	return &cuip->cui_item;
+}
+
 static const struct xfs_item_ops xfs_cui_item_ops = {
 	.iop_size	= xfs_cui_item_size,
 	.iop_format	= xfs_cui_item_format,
@@ -573,6 +599,7 @@ static const struct xfs_item_ops xfs_cui_item_ops = {
 	.iop_release	= xfs_cui_item_release,
 	.iop_recover	= xfs_cui_item_recover,
 	.iop_match	= xfs_cui_item_match,
+	.iop_relog	= xfs_cui_item_relog,
 };
 
 /*
diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
index 34dd6b02bdc8..535b0358e0aa 100644
--- a/fs/xfs/xfs_rmap_item.c
+++ b/fs/xfs/xfs_rmap_item.c
@@ -589,6 +589,32 @@ xfs_rui_item_match(
 	return RUI_ITEM(lip)->rui_format.rui_id == intent_id;
 }
 
+/* Relog an intent item to push the log tail forward. */
+static struct xfs_log_item *
+xfs_rui_item_relog(
+	struct xfs_log_item		*intent,
+	struct xfs_trans		*tp)
+{
+	struct xfs_rud_log_item		*rudp;
+	struct xfs_rui_log_item		*ruip;
+	struct xfs_map_extent		*extp;
+	unsigned int			count;
+
+	count = RUI_ITEM(intent)->rui_format.rui_nextents;
+	extp = RUI_ITEM(intent)->rui_format.rui_extents;
+
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	rudp = xfs_trans_get_rud(tp, RUI_ITEM(intent));
+	set_bit(XFS_LI_DIRTY, &rudp->rud_item.li_flags);
+
+	ruip = xfs_rui_init(tp->t_mountp, count);
+	memcpy(ruip->rui_format.rui_extents, extp, count * sizeof(*extp));
+	atomic_set(&ruip->rui_next_extent, count);
+	xfs_trans_add_item(tp, &ruip->rui_item);
+	set_bit(XFS_LI_DIRTY, &ruip->rui_item.li_flags);
+	return &ruip->rui_item;
+}
+
 static const struct xfs_item_ops xfs_rui_item_ops = {
 	.iop_size	= xfs_rui_item_size,
 	.iop_format	= xfs_rui_item_format,
@@ -596,6 +622,7 @@ static const struct xfs_item_ops xfs_rui_item_ops = {
 	.iop_release	= xfs_rui_item_release,
 	.iop_recover	= xfs_rui_item_recover,
 	.iop_match	= xfs_rui_item_match,
+	.iop_relog	= xfs_rui_item_relog,
 };
 
 /*
diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
index a3a35a2d8ed9..362c155be525 100644
--- a/fs/xfs/xfs_trace.h
+++ b/fs/xfs/xfs_trace.h
@@ -2533,6 +2533,7 @@ DEFINE_DEFER_PENDING_EVENT(xfs_defer_create_intent);
 DEFINE_DEFER_PENDING_EVENT(xfs_defer_cancel_list);
 DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_finish);
 DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_abort);
+DEFINE_DEFER_PENDING_EVENT(xfs_defer_relog_intent);
 
 #define DEFINE_BMAP_FREE_DEFERRED_EVENT DEFINE_PHYS_EXTENT_DEFERRED_EVENT
 DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_bmap_free_defer);
diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
index 995c1513693c..e838e8327510 100644
--- a/fs/xfs/xfs_trans.h
+++ b/fs/xfs/xfs_trans.h
@@ -78,6 +78,8 @@ struct xfs_item_ops {
 	int (*iop_recover)(struct xfs_log_item *lip,
 			   struct xfs_defer_capture **dfcp);
 	bool (*iop_match)(struct xfs_log_item *item, uint64_t id);
+	struct xfs_log_item *(*iop_relog)(struct xfs_log_item *intent,
+			struct xfs_trans *tp);
 };
 
 /*
@@ -239,4 +241,12 @@ void		xfs_trans_buf_copy_type(struct xfs_buf *dst_bp,
 
 extern kmem_zone_t	*xfs_trans_zone;
 
+static inline struct xfs_log_item *
+xfs_trans_item_relog(
+	struct xfs_log_item	*lip,
+	struct xfs_trans	*tp)
+{
+	return lip->li_ops->iop_relog(lip, tp);
+}
+
 #endif	/* __XFS_TRANS_H__ */



* [PATCH 3/3] xfs: use the log grant push threshold to decide if we're going to relog deferred items
  2020-09-17  3:29 [PATCH 0/3] xfs: fix some log stalling problems in defer ops Darrick J. Wong
  2020-09-17  3:29 ` [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
  2020-09-17  3:30 ` [PATCH 2/3] xfs: periodically relog deferred intent items Darrick J. Wong
@ 2020-09-17  3:30 ` Darrick J. Wong
  2020-09-17 15:28   ` Brian Foster
  2 siblings, 1 reply; 13+ messages in thread
From: Darrick J. Wong @ 2020-09-17  3:30 UTC (permalink / raw)
  To: darrick.wong; +Cc: linux-xfs, david, bfoster

From: Darrick J. Wong <darrick.wong@oracle.com>

Now that we've landed a means for the defer ops manager to ask log items
to relog themselves to move the log tail forward, we can improve how we
decide when to relog so that we're not just using an arbitrary hardcoded
value.

The XFS log has a "push threshold", which tells us how far we'd have to
move the log tail forward to keep 25% of the ondisk log space available.
We use this threshold to decide when to force defer ops chains to relog
themselves.  This avoids unnecessary relogging (which adds extra steps
to metadata updates) while helping us to avoid pinning the tail.
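
In miniature, the decision in xfs_defer_relog below boils down to the
following (where relog() is shorthand for the done-item/new-intent
dance added in the previous patch):

	threshold_lsn = xlog_grant_push_threshold(log, 0);
	if (threshold_lsn == NULLCOMMITLSN)
		return 0;	/* >= 25% of the log is free, nothing to do */

	if (XFS_LSN_CMP(lip->li_lsn, threshold_lsn) < 0)
		relog(lip);	/* behind the threshold: pinning the tail */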

A better algorithm would be to relog only when we detect that the time
required to move the tail forward is greater than the time remaining
before all the log space gets used up, but letting the upper levels
drive the relogging means that it is difficult to coordinate relogging
the lowest LSN'd intents first.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/xfs/libxfs/xfs_defer.c |   32 +++++++++++++++++++++++++-------
 fs/xfs/xfs_log.c          |   41 +++++++++++++++++++++++++++++++----------
 fs/xfs/xfs_log.h          |    2 ++
 3 files changed, 58 insertions(+), 17 deletions(-)


diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
index 7938e4d3af90..97ec36f32a0a 100644
--- a/fs/xfs/libxfs/xfs_defer.c
+++ b/fs/xfs/libxfs/xfs_defer.c
@@ -17,6 +17,7 @@
 #include "xfs_inode_item.h"
 #include "xfs_trace.h"
 #include "xfs_icache.h"
+#include "xfs_log.h"
 
 /*
  * Deferred Operations in XFS
@@ -372,15 +373,35 @@ xfs_defer_relog(
 	struct list_head		*dfops)
 {
 	struct xfs_defer_pending	*dfp;
+	xfs_lsn_t			threshold_lsn;
 
 	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
 
+	/*
+	 * Figure out where we need the tail to be in order to maintain the
+	 * minimum required free space in the log.
+	 */
+	threshold_lsn = xlog_grant_push_threshold((*tpp)->t_mountp->m_log, 0);
+	if (threshold_lsn == NULLCOMMITLSN)
+		return 0;
+
 	list_for_each_entry(dfp, dfops, dfp_list) {
+		/*
+		 * If the log intent item for this deferred op is behind the
+		 * threshold, we're running out of space and need to relog it
+		 * to release the tail.
+		 */
+		if (dfp->dfp_intent == NULL ||
+		    XFS_LSN_CMP(dfp->dfp_intent->li_lsn, threshold_lsn) >= 0)
+			continue;
+
 		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
 		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
 	}
 
-	return xfs_defer_trans_roll(tpp);
+	if ((*tpp)->t_flags & XFS_TRANS_DIRTY)
+		return xfs_defer_trans_roll(tpp);
+	return 0;
 }
 
 /*
@@ -444,7 +465,6 @@ xfs_defer_finish_noroll(
 	struct xfs_trans		**tp)
 {
 	struct xfs_defer_pending	*dfp;
-	unsigned int			nr_rolls = 0;
 	int				error = 0;
 	LIST_HEAD(dop_pending);
 
@@ -471,11 +491,9 @@ xfs_defer_finish_noroll(
 			goto out_shutdown;
 
-		/* Every few rolls we relog all the intent items. */
+		/* Relog intent items as needed to keep the log tail moving. */
-		if (!(++nr_rolls % 7)) {
-			error = xfs_defer_relog(tp, &dop_pending);
-			if (error)
-				goto out_shutdown;
-		}
+		error = xfs_defer_relog(tp, &dop_pending);
+		if (error)
+			goto out_shutdown;
 
 		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
 				       dfp_list);
diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index ad0c69ee8947..62c9e0aaa7df 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -1475,14 +1475,15 @@ xlog_commit_record(
 }
 
 /*
- * Push on the buffer cache code if we ever use more than 75% of the on-disk
- * log space.  This code pushes on the lsn which would supposedly free up
- * the 25% which we want to leave free.  We may need to adopt a policy which
- * pushes on an lsn which is further along in the log once we reach the high
- * water mark.  In this manner, we would be creating a low water mark.
+ * Compute the LSN push target needed to push on the buffer cache code if we
+ * ever use more than 75% of the on-disk log space.  This code pushes on the
+ * lsn which would supposedly free up the 25% which we want to leave free.  We
+ * may need to adopt a policy which pushes on an lsn which is further along in
+ * the log once we reach the high water mark.  In this manner, we would be
+ * creating a low water mark.
  */
-STATIC void
-xlog_grant_push_ail(
+xfs_lsn_t
+xlog_grant_push_threshold(
 	struct xlog	*log,
 	int		need_bytes)
 {
@@ -1508,7 +1509,7 @@ xlog_grant_push_ail(
 	free_threshold = max(free_threshold, (log->l_logBBsize >> 2));
 	free_threshold = max(free_threshold, 256);
 	if (free_blocks >= free_threshold)
-		return;
+		return NULLCOMMITLSN;
 
 	xlog_crack_atomic_lsn(&log->l_tail_lsn, &threshold_cycle,
 						&threshold_block);
@@ -1528,13 +1529,33 @@ xlog_grant_push_ail(
 	if (XFS_LSN_CMP(threshold_lsn, last_sync_lsn) > 0)
 		threshold_lsn = last_sync_lsn;
 
+	return threshold_lsn;
+}
+
+/*
+ * Push on the buffer cache code if we ever use more than 75% of the on-disk
+ * log space.  This code pushes on the lsn which would supposedly free up
+ * the 25% which we want to leave free.  We may need to adopt a policy which
+ * pushes on an lsn which is further along in the log once we reach the high
+ * water mark.  In this manner, we would be creating a low water mark.
+ */
+STATIC void
+xlog_grant_push_ail(
+	struct xlog	*log,
+	int		need_bytes)
+{
+	xfs_lsn_t	threshold_lsn;
+
+	threshold_lsn = xlog_grant_push_threshold(log, need_bytes);
+	if (threshold_lsn == NULLCOMMITLSN || XLOG_FORCED_SHUTDOWN(log))
+		return;
+
 	/*
 	 * Get the transaction layer to kick the dirty buffers out to
 	 * disk asynchronously. No point in trying to do this if
 	 * the filesystem is shutting down.
 	 */
-	if (!XLOG_FORCED_SHUTDOWN(log))
-		xfs_ail_push(log->l_ailp, threshold_lsn);
+	xfs_ail_push(log->l_ailp, threshold_lsn);
 }
 
 /*
diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h
index 1412d6993f1e..58c3fcbec94a 100644
--- a/fs/xfs/xfs_log.h
+++ b/fs/xfs/xfs_log.h
@@ -141,4 +141,6 @@ void	xfs_log_quiesce(struct xfs_mount *mp);
 bool	xfs_log_check_lsn(struct xfs_mount *, xfs_lsn_t);
 bool	xfs_log_in_recovery(struct xfs_mount *);
 
+xfs_lsn_t xlog_grant_push_threshold(struct xlog *log, int need_bytes);
+
 #endif	/* __XFS_LOG_H__ */



* Re: [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished
  2020-09-17  3:29 ` [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
@ 2020-09-17  5:57   ` Dave Chinner
  2020-09-17 15:27   ` Brian Foster
  1 sibling, 0 replies; 13+ messages in thread
From: Dave Chinner @ 2020-09-17  5:57 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, bfoster

On Wed, Sep 16, 2020 at 08:29:53PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> The defer ops code has been finishing items in the wrong order -- if a

<snip long explanation>

Yeah, I'd kinda come to the same conclusion while reviewing the
recovery process. The analogy I made in my mind was the difference
in overhead of tracking a breadth-first tree walk vs a depth-first
tree walk...

> As originally written, the code used list_splice_tail_init instead of
> list_splice_init, so change that, and leave a short comment explaining
> our actions.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fs/xfs/libxfs/xfs_defer.c |   11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> 
> diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> index 97523b394932..84a70edd0da1 100644
> --- a/fs/xfs/libxfs/xfs_defer.c
> +++ b/fs/xfs/libxfs/xfs_defer.c
> @@ -431,8 +431,17 @@ xfs_defer_finish_noroll(
>  
>  	/* Until we run out of pending work to finish... */
>  	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
> +		/*
> +		 * Deferred items that are created in the process of finishing
> +		 * other deferred work items should be queued at the head of
> +		 * the pending list, which puts them ahead of the deferred work
> +		 * that was created by the caller.  This keeps the number of
> +		 * pending work items to a minimum, which decreases the amount
> +		 * of time that any one intent item can stick around in memory,
> +		 * pinning the log tail.
> +		 */
>  		xfs_defer_create_intents(*tp);
> -		list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);
> +		list_splice_init(&(*tp)->t_dfops, &dop_pending);

*nod*.

My favourite sort of fix - a couple of hundred lines of explanation
for a one-liner :)

Reviewed-by: Dave Chinner <dchinner@redhat.com>

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH 2/3] xfs: periodically relog deferred intent items
  2020-09-17  3:30 ` [PATCH 2/3] xfs: periodically relog deferred intent items Darrick J. Wong
@ 2020-09-17  6:11   ` Dave Chinner
  2020-09-17  7:18     ` Darrick J. Wong
  2020-09-17 15:28   ` Brian Foster
  1 sibling, 1 reply; 13+ messages in thread
From: Dave Chinner @ 2020-09-17  6:11 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, bfoster

On Wed, Sep 16, 2020 at 08:30:00PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> There's a subtle design flaw in the deferred log item code that can lead
> to pinning the log tail.  Taking up the defer ops chain examples from
> the previous commit, we can get trapped in sequences like this:
> 
> Caller hands us a transaction t0 with D0-D3 attached.  The defer ops
> chain will look like the following if the transaction rolls succeed:
> 
> t1: D0(t0), D1(t0), D2(t0), D3(t0)
> t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
> t3: d5(t1), D1(t0), D2(t0), D3(t0)
> ...
> t9: d9(t7), D3(t0)
> t10: D3(t0)
> t11: d10(t10), d11(t10)
> t12: d11(t10)
> 
> In transaction 9, we finish d9 and try to roll to t10 while holding onto
> an intent item for D3 that we logged in t0.
> 
> The previous commit changed the order in which we place new defer ops in
> the defer ops processing chain to reduce the maximum chain length.  Now
> make xfs_defer_finish_noroll capable of relogging the entire chain
> periodically so that we can always move the log tail forward.  We do
> this every seven loops, having observed that while most chains never
> exceed seven items in length, the rest go far over that and seem to
> be involved in most of the stall problems.
> 
> Callers are now required to ensure that the transaction reservation is
> large enough to handle logging done items and new intent items for the
> maximum possible chain length.  Most callers are careful to keep the
> chain lengths low, so the overhead should be minimal.
> 
> (Note that in the next patch we'll make it so that we only relog on
> demand, since 7 is an arbitrary number that I used here to get the basic
> mechanics working.)
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fs/xfs/libxfs/xfs_defer.c  |   30 ++++++++++++++++++++++++++++++
>  fs/xfs/xfs_bmap_item.c     |   25 +++++++++++++++++++++++++
>  fs/xfs/xfs_extfree_item.c  |   29 +++++++++++++++++++++++++++++
>  fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++++++++++++
>  fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++++++++++++
>  fs/xfs/xfs_trace.h         |    1 +
>  fs/xfs/xfs_trans.h         |   10 ++++++++++
>  7 files changed, 149 insertions(+)
> 
> 
> diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> index 84a70edd0da1..7938e4d3af90 100644
> --- a/fs/xfs/libxfs/xfs_defer.c
> +++ b/fs/xfs/libxfs/xfs_defer.c
> @@ -361,6 +361,28 @@ xfs_defer_cancel_list(
>  	}
>  }
>  
> +/*
> + * Prevent a log intent item from pinning the tail of the log by logging a
> + * done item to release the intent item; and then log a new intent item.
> + * The caller should provide a fresh transaction and roll it after we're done.
> + */
> +static int
> +xfs_defer_relog(
> +	struct xfs_trans		**tpp,
> +	struct list_head		*dfops)
> +{
> +	struct xfs_defer_pending	*dfp;
> +
> +	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
> +
> +	list_for_each_entry(dfp, dfops, dfp_list) {
> +		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
> +		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);

Any reason for xfs_trans_item_relog() when it's a one liner?

> +	}
> +
> +	return xfs_defer_trans_roll(tpp);
> +}
> +
>  /*
>   * Log an intent-done item for the first pending intent, and finish the work
>   * items.
> @@ -422,6 +444,7 @@ xfs_defer_finish_noroll(
>  	struct xfs_trans		**tp)
>  {
>  	struct xfs_defer_pending	*dfp;
> +	unsigned int			nr_rolls = 0;
>  	int				error = 0;
>  	LIST_HEAD(dop_pending);
>  
> @@ -447,6 +470,13 @@ xfs_defer_finish_noroll(
>  		if (error)
>  			goto out_shutdown;
>  
> +		/* Every few rolls we relog all the intent items. */
> +		if (!(++nr_rolls % 7)) {
> +			error = xfs_defer_relog(tp, &dop_pending);
> +			if (error)
> +				goto out_shutdown;
> +		}

Urk.

I think I've got a better idea: rather than a counter, use something
meaningful as to whether the intent has been committed or not. e.g.
use something like xfs_log_item_in_current_chkpt() to determine if
we need to relog the intent.

i.e. If the intent is active in the CIL, then we don't need to relog
it. If the intent has been committed to the journal and is no longer
in the CIL list, relog it so the next CIL push will move it forward
in the journal.
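
i.e. something along these lines in xfs_defer_relog() (an untested
sketch of the suggestion, not a tested patch):

	list_for_each_entry(dfp, dfops, dfp_list) {
		/* Still in the CIL?  The next push moves it anyway. */
		if (dfp->dfp_intent == NULL ||
		    xfs_log_item_in_current_chkpt(dfp->dfp_intent))
			continue;

		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
	}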

The intent relogging functions look fine, though.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH 2/3] xfs: periodically relog deferred intent items
  2020-09-17  6:11   ` Dave Chinner
@ 2020-09-17  7:18     ` Darrick J. Wong
  0 siblings, 0 replies; 13+ messages in thread
From: Darrick J. Wong @ 2020-09-17  7:18 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs, bfoster

On Thu, Sep 17, 2020 at 04:11:48PM +1000, Dave Chinner wrote:
> On Wed, Sep 16, 2020 at 08:30:00PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > There's a subtle design flaw in the deferred log item code that can lead
> > to pinning the log tail.  Taking up the defer ops chain examples from
> > the previous commit, we can get trapped in sequences like this:
> > 
> > Caller hands us a transaction t0 with D0-D3 attached.  The defer ops
> > chain will look like the following if the transaction rolls succeed:
> > 
> > t1: D0(t0), D1(t0), D2(t0), D3(t0)
> > t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
> > t3: d5(t1), D1(t0), D2(t0), D3(t0)
> > ...
> > t9: d9(t7), D3(t0)
> > t10: D3(t0)
> > t11: d10(t10), d11(t10)
> > t12: d11(t10)
> > 
> > In transaction 9, we finish d9 and try to roll to t10 while holding onto
> > an intent item for D3 that we logged in t0.
> > 
> > The previous commit changed the order in which we place new defer ops in
> > the defer ops processing chain to reduce the maximum chain length.  Now
> > make xfs_defer_finish_noroll capable of relogging the entire chain
> > periodically so that we can always move the log tail forward.  We do
> > this every seven loops, having observed that while most chains never
> > exceed seven items in length, the rest go far over that and seem to
> > be involved in most of the stall problems.
> > 
> > Callers are now required to ensure that the transaction reservation is
> > large enough to handle logging done items and new intent items for the
> > maximum possible chain length.  Most callers are careful to keep the
> > chain lengths low, so the overhead should be minimal.
> > 
> > (Note that in the next patch we'll make it so that we only relog on
> > demand, since 7 is an arbitrary number that I used here to get the basic
> > mechanics working.)
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  fs/xfs/libxfs/xfs_defer.c  |   30 ++++++++++++++++++++++++++++++
> >  fs/xfs/xfs_bmap_item.c     |   25 +++++++++++++++++++++++++
> >  fs/xfs/xfs_extfree_item.c  |   29 +++++++++++++++++++++++++++++
> >  fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++++++++++++
> >  fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++++++++++++
> >  fs/xfs/xfs_trace.h         |    1 +
> >  fs/xfs/xfs_trans.h         |   10 ++++++++++
> >  7 files changed, 149 insertions(+)
> > 
> > 
> > diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> > index 84a70edd0da1..7938e4d3af90 100644
> > --- a/fs/xfs/libxfs/xfs_defer.c
> > +++ b/fs/xfs/libxfs/xfs_defer.c
> > @@ -361,6 +361,28 @@ xfs_defer_cancel_list(
> >  	}
> >  }
> >  
> > +/*
> > + * Prevent a log intent item from pinning the tail of the log by logging a
> > + * done item to release the intent item; and then log a new intent item.
> > + * The caller should provide a fresh transaction and roll it after we're done.
> > + */
> > +static int
> > +xfs_defer_relog(
> > +	struct xfs_trans		**tpp,
> > +	struct list_head		*dfops)
> > +{
> > +	struct xfs_defer_pending	*dfp;
> > +
> > +	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
> > +
> > +	list_for_each_entry(dfp, dfops, dfp_list) {
> > +		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
> > +		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
> 
> Any reason for xfs_trans_item_relog() when it's a one liner?

There aren't log intent items in userspace, so xfs_trans_item_relog
becomes a NOP macro in the xfsprogs port.

> > +	}
> > +
> > +	return xfs_defer_trans_roll(tpp);
> > +}
> > +
> >  /*
> >   * Log an intent-done item for the first pending intent, and finish the work
> >   * items.
> > @@ -422,6 +444,7 @@ xfs_defer_finish_noroll(
> >  	struct xfs_trans		**tp)
> >  {
> >  	struct xfs_defer_pending	*dfp;
> > +	unsigned int			nr_rolls = 0;
> >  	int				error = 0;
> >  	LIST_HEAD(dop_pending);
> >  
> > @@ -447,6 +470,13 @@ xfs_defer_finish_noroll(
> >  		if (error)
> >  			goto out_shutdown;
> >  
> > +		/* Every few rolls we relog all the intent items. */
> > +		if (!(++nr_rolls % 7)) {
> > +			error = xfs_defer_relog(tp, &dop_pending);
> > +			if (error)
> > +				goto out_shutdown;
> > +		}
> 
> Urk.
> 
> I think I've got a better idea: rather than a counter, use something
> meaningful as to whether the intent has been committed or not. e.g.
> use something like xfs_log_item_in_current_chkpt() to determine if
> we need to relog the intent.

I'll take a look at that in the morning.

> i.e. If the intent is active in the CIL, then we don't need to relog
> it. If the intent has been committed to the journal and is no longer
> in the CIL list, relog it so the next CIL push will move it forward
> in the journal.
> 
> The intent relogging functions look fine, though.

Thanks for digging through some of these. :)

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com


* Re: [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished
  2020-09-17  3:29 ` [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
  2020-09-17  5:57   ` Dave Chinner
@ 2020-09-17 15:27   ` Brian Foster
  2020-09-17 16:38     ` Darrick J. Wong
  1 sibling, 1 reply; 13+ messages in thread
From: Brian Foster @ 2020-09-17 15:27 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, david

On Wed, Sep 16, 2020 at 08:29:53PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> The defer ops code has been finishing items in the wrong order -- if a
> top level defer op creates items A and B, and finishing item A creates
> more defer ops A1 and A2, we'll put the new items on the end of the
> chain and process them in the order A B A1 A2.  This is kind of weird,
> since it's convenient for programmers to be able to think of A and B as
> an ordered sequence where all the sub-tasks for A must finish before we
> move on to B, e.g. A A1 A2 B.
> 
> Right now, our log intent items are not so complex that this matters,
> but this will become important for the atomic extent swapping patchset.
> In order to maintain correct reference counting of extents, we have to
> unmap and remap extents in that order, and we want to complete that work
> before moving on to the next range that the user wants to swap.  This
> patch fixes defer ops to satisfy that requirement.
> 
> The primary symptom of the incorrect order was noticed in an early
> performance analysis of the atomic extent swap code.  An astonishingly
> large number of deferred work items accumulated when userspace requested
> an atomic update of two very fragmented files.  The cause of this was
> traced to the same ordering bug in the inner loop of
> xfs_defer_finish_noroll.
> 
> If the ->finish_item method of a deferred operation queues new deferred
> operations, those new deferred ops are appended to the tail of the
> pending work list.  To illustrate, say that a caller creates a
> transaction t0 with four deferred operations D0-D3.  The first thing
> defer ops does is roll the transaction to t1, leaving us with:
> 
> t1: D0(t0), D1(t0), D2(t0), D3(t0)
> 
> Let's say that finishing each of D0-D3 will create two new deferred ops.
> After finishing D0 and rolling, we'll have the following chain:
> 
> t2: D1(t0), D2(t0), D3(t0), d4(t1), d5(t1)
> 
> d4 and d5 were logged to t1.  Notice that while we're about to start
> work on D1, we haven't actually completed all the work implied by D0
> being finished.  So far we've been careful (or lucky) to structure the
> dfops callers such that D1 doesn't depend on d4 or d5 being finished,
> but this is a potential logic bomb.
> 
> There's a second problem lurking.  Let's see what happens as we finish
> D1-D3:
> 
> t3: D2(t0), D3(t0), d4(t1), d5(t1), d6(t2), d7(t2)
> t4: D3(t0), d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3)
> t5: d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
> 
> Let's say that d4-d11 are simple work items that don't queue any other
> operations, which means that we can complete each one in turn.  We
> finish d4 and roll to t6:
> 
> t6: d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
> t7: d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
> ...
> t11: d10(t4), d11(t4)
> t12: d11(t4)
> <done>
> 
> When we try to roll to transaction #12, we're holding defer op d11,
> which we logged way back in t4.  This means that the tail of the log is
> pinned at t4.  If the log is very small or there are a lot of other
> threads updating metadata, this means that we might have wrapped the log
> and cannot roll to t12 because there isn't enough space left before
> we'd run into t4.
> 
> Let's shift back to the original failure.  I mentioned before that I
> discovered this flaw while developing the atomic file update code.  In
> that scenario, we have a defer op (D0) that finds a range of file blocks
> to remap, creates a handful of new defer ops to do that, and then asks
> to be continued with however much work remains.
> 
> So, D0 is the original swapext deferred op.  The first thing defer ops
> does is roll to t1:
> 
> t1: D0(t0)
> 
> We try to finish D0, logging d1 and d2 in the process, but can't get all
> the work done.  We log a done item and a new intent item for the work
> that D0 still has to do, and roll to t2:
> 
> t2: D0'(t1), d1(t1), d2(t1)
> 
> We roll and try to finish D0', but still can't get all the work done, so
> we log a done item and a new intent item for it, requeue D0 a second
> time, and roll to t3:
> 
> t3: D0''(t2), d1(t1), d2(t1), d3(t2), d4(t2)
> 
> If it takes 48 more rolls to complete D0, then we'll finally dispense
> with D0 in t50:
> 
> t50: D<fifty primes>(t49), d1(t1), ..., d102(t50)
> 
> We then try to roll again to get a chain like this:
> 
> t51: d1(t1), d2(t1), ..., d101(t50), d102(t50)
> ...
> t152: d102(t50)
> <done>
> 
> Notice that in rolling to transaction #51, we're holding on to a log
> intent item for d1 that was logged in transaction #1.  This means that
> the tail of the log is pinned at t1.  If the log is very small or there
> are a lot of other threads updating metadata, this means that we might
> have wrapped the log and cannot roll to t51 because there isn't enough
> space left before we'd run into t1.  This is of course problem #2 again.
> 
> But notice the third problem with this scenario: we have 102 defer ops
> tied to this transaction!  Each of these items is backed by pinned
> kernel memory, which means that we risk OOM if the chains get too long.
> 
> Yikes.  Problem #1 is a subtle logic bomb that could hit someone in the
> future; problem #2 applies (rarely) to the current upstream, and problem
> #3 applies to work under development.
> 
> This is not how incremental deferred operations were supposed to work.
> The dfops design of logging in the same transaction an intent-done item
> and a new intent item for the work remaining was to make it so that we
> only have to juggle enough deferred work items to finish that one small
> piece of work.  Deferred log item recovery will find that first
> unfinished work item and restart it, no matter how many other intent
> items might follow it in the log.  Therefore, it's ok to put the new
> intents at the start of the dfops chain.
> 
> For the first example, the chains look like this:
> 
> t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
> t3: d5(t1), D1(t0), D2(t0), D3(t0)
> ...
> t9: d9(t7), D3(t0)
> t10: D3(t0)
> t11: d10(t10), d11(t10)
> t12: d11(t10)
> 
> For the second example, the chains look like this:
> 
> t1: D0(t0)
> t2: d1(t1), d2(t1), D0'(t1)
> t3: d2(t1), D0'(t1)
> t4: D0'(t1)
> t5: d1(t4), d2(t4), D0''(t4)
> ...
> t148: D0<50 primes>(t147)
> t149: d101(t148), d102(t148)
> t150: d102(t148)
> <done>
> 

Ok, this one looks funny at first glance, but I think it makes sense..
If I use a simple example and assume D0(t0) is a large unmap intent and
the subsequent child dfops (d1, d2) are extent frees, this essentially
means that we'd unmap an (incomplete) range, defer the frees for the
range we did unmap, process those frees, then get back around to the
unmap and pick up the next chunk for D0' (and so on). That would be
instead of the current behavior which presumably process the unmap (D0)
continuously and queues up all of the frees (dN) until the former is
100% complete. Right?

> This actually sucks more for pinning the log tail (we try to roll to t10
> while holding an intent item that was logged in t1) but we've solved
> problem #1.  We've also reduced the maximum chain length from:
> 
>     sum(all the new items) + nr_original_items
> 
> to:
> 
>     max(new items that each original item creates) + nr_original_items
> 
> This solves problem #3 by sharply reducing the number of defer ops that
> can be attached to a transaction at any given time.  The change makes
> the problem of log tail pinning worse, but is improvement we need to
> solve problem #2.  Actually solving #2, however, is left to the next
> patch.
> 
> Note that a subsequent analysis of some hard-to-trigger reflink and COW
> livelocks on extremely fragmented filesystems (or systems running a lot
> of IO threads) showed the same symptoms -- uncomfortably large numbers
> of incore deferred work items and occasional stalls in the transaction
> grant code while waiting for log reservations.  I think this patch and
> the next one will also solve these problems.
> 
> As originally written, the code used list_splice_tail_init instead of
> list_splice_init, so change that, and leave a short comment explaining
> our actions.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fs/xfs/libxfs/xfs_defer.c |   11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> 
> diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> index 97523b394932..84a70edd0da1 100644
> --- a/fs/xfs/libxfs/xfs_defer.c
> +++ b/fs/xfs/libxfs/xfs_defer.c
> @@ -431,8 +431,17 @@ xfs_defer_finish_noroll(
>  
>  	/* Until we run out of pending work to finish... */
>  	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
> +		/*
> +		 * Deferred items that are created in the process of finishing
> +		 * other deferred work items should be queued at the head of
> +		 * the pending list, which puts them ahead of the deferred work
> +		 * that was created by the caller.  This keeps the number of
> +		 * pending work items to a minimum, which decreases the amount
> +		 * of time that any one intent item can stick around in memory,
> +		 * pinning the log tail.
> +		 */
>  		xfs_defer_create_intents(*tp);
> -		list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);
> +		list_splice_init(&(*tp)->t_dfops, &dop_pending);

This has me wondering whether we should consider explicit separation of
chains through some kind of parent/child relationship to make the
behavior and implementation more clear to reason about. That's a bigger
picture topic, though. Assuming I'm following things correctly above,
this looks reasonable to me:

Reviewed-by: Brian Foster <bfoster@redhat.com>

>  
>  		error = xfs_defer_trans_roll(tp);
>  		if (error)
> 



* Re: [PATCH 2/3] xfs: periodically relog deferred intent items
  2020-09-17  3:30 ` [PATCH 2/3] xfs: periodically relog deferred intent items Darrick J. Wong
  2020-09-17  6:11   ` Dave Chinner
@ 2020-09-17 15:28   ` Brian Foster
  2020-09-18  0:36     ` Darrick J. Wong
  1 sibling, 1 reply; 13+ messages in thread
From: Brian Foster @ 2020-09-17 15:28 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, david

On Wed, Sep 16, 2020 at 08:30:00PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> There's a subtle design flaw in the deferred log item code that can lead
> to pinning the log tail.  Taking up the defer ops chain examples from
> the previous commit, we can get trapped in sequences like this:
> 
> Caller hands us a transaction t0 with D0-D3 attached.  The defer ops
> chain will look like the following if the transaction rolls succeed:
> 
> t1: D0(t0), D1(t0), D2(t0), D3(t0)
> t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
> t3: d5(t1), D1(t0), D2(t0), D3(t0)
> ...
> t9: d9(t7), D3(t0)
> t10: D3(t0)
> t11: d10(t10), d11(t10)
> t12: d11(t10)
> 
> In transaction 9, we finish d9 and try to roll to t10 while holding onto
> an intent item for D3 that we logged in t0.
> 
> The previous commit changed the order in which we place new defer ops in
> the defer ops processing chain to reduce the maximum chain length.  Now
> make xfs_defer_finish_noroll capable of relogging the entire chain
> periodically so that we can always move the log tail forward.  We do
> this every seven loops, having observed that while most chains never
> exceed seven items in length, the rest go far over that and seem to
> be involved in most of the stall problems.
> 
> Callers are now required to ensure that the transaction reservation is
> large enough to handle logging done items and new intent items for the
> maximum possible chain length.  Most callers are careful to keep the
> chain lengths low, so the overhead should be minimal.
> 
> (Note that in the next patch we'll make it so that we only relog on
> demand, since 7 is an arbitrary number that I used here to get the basic
> mechanics working.)
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fs/xfs/libxfs/xfs_defer.c  |   30 ++++++++++++++++++++++++++++++
>  fs/xfs/xfs_bmap_item.c     |   25 +++++++++++++++++++++++++
>  fs/xfs/xfs_extfree_item.c  |   29 +++++++++++++++++++++++++++++
>  fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++++++++++++
>  fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++++++++++++
>  fs/xfs/xfs_trace.h         |    1 +
>  fs/xfs/xfs_trans.h         |   10 ++++++++++
>  7 files changed, 149 insertions(+)
> 
> 
> diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> index 84a70edd0da1..7938e4d3af90 100644
> --- a/fs/xfs/libxfs/xfs_defer.c
> +++ b/fs/xfs/libxfs/xfs_defer.c
> @@ -361,6 +361,28 @@ xfs_defer_cancel_list(
>  	}
>  }
>  
> +/*
> + * Prevent a log intent item from pinning the tail of the log by logging a
> + * done item to release the intent item; and then log a new intent item.
> + * The caller should provide a fresh transaction and roll it after we're done.
> + */
> +static int
> +xfs_defer_relog(
> +	struct xfs_trans		**tpp,
> +	struct list_head		*dfops)
> +{
> +	struct xfs_defer_pending	*dfp;
> +
> +	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
> +
> +	list_for_each_entry(dfp, dfops, dfp_list) {
> +		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
> +		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
> +	}
> +
> +	return xfs_defer_trans_roll(tpp);
> +}
> +
>  /*
>   * Log an intent-done item for the first pending intent, and finish the work
>   * items.
> @@ -422,6 +444,7 @@ xfs_defer_finish_noroll(
>  	struct xfs_trans		**tp)
>  {
>  	struct xfs_defer_pending	*dfp;
> +	unsigned int			nr_rolls = 0;
>  	int				error = 0;
>  	LIST_HEAD(dop_pending);
>  
> @@ -447,6 +470,13 @@ xfs_defer_finish_noroll(
>  		if (error)
>  			goto out_shutdown;
>  
> +		/* Every few rolls we relog all the intent items. */
> +		if (!(++nr_rolls % 7)) {
> +			error = xfs_defer_relog(tp, &dop_pending);
> +			if (error)
> +				goto out_shutdown;
> +		}
> +

I thought the purpose of the push threshold detection was to avoid this
hardcoded relog logic..? On a quick look, I see the next patch
immediately replaces this. That should probably be refactored such that
the proper code is introduced from the start...

>  		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
>  				       dfp_list);
>  		error = xfs_defer_finish_one(*tp, dfp);
> diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
> index 8461285a9dd9..2d4c60776714 100644
> --- a/fs/xfs/xfs_bmap_item.c
> +++ b/fs/xfs/xfs_bmap_item.c
> @@ -535,6 +535,30 @@ xfs_bui_item_match(
>  	return BUI_ITEM(lip)->bui_format.bui_id == intent_id;
>  }
>  
> +/* Relog an intent item to push the log tail forward. */
> +static struct xfs_log_item *
> +xfs_bui_item_relog(
> +	struct xfs_log_item		*intent,
> +	struct xfs_trans		*tp)
> +{
> +	struct xfs_bud_log_item		*budp;
> +	struct xfs_bui_log_item		*buip;
> +	struct xfs_map_extent		*extp;
> +
> +	extp = BUI_ITEM(intent)->bui_format.bui_extents;
> +
> +	tp->t_flags |= XFS_TRANS_DIRTY;
> +	budp = xfs_trans_get_bud(tp, BUI_ITEM(intent));
> +	set_bit(XFS_LI_DIRTY, &budp->bud_item.li_flags);
> +
> +	buip = xfs_bui_init(tp->t_mountp);
> +	memcpy(buip->bui_format.bui_extents, extp, sizeof(*extp));
> +	atomic_set(&buip->bui_next_extent, 1);

I was a little confused by the hardcoded value, then I saw that the max
value is one. Since we have the extent array and ->bui_next_extent
fields, perhaps this would be more robust if coded to copy those such
that it wouldn't break if the max value changed..? E.g., memcpy() the maps *
bui_nextents and copy the next_extent index directly.

(Looking further, I see that the other intent relog handlers follow that
pattern presumably because they allow a count > 1. So IOW, I'd just
follow that same pattern here.)
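
I.e., something like this sketch, with a new 'count' local (mirroring
the EFI/CUI/RUI handlers later in the patch):

	count = BUI_ITEM(intent)->bui_format.bui_nextents;
	extp = BUI_ITEM(intent)->bui_format.bui_extents;
	...
	buip = xfs_bui_init(tp->t_mountp);
	memcpy(buip->bui_format.bui_extents, extp, count * sizeof(*extp));
	atomic_set(&buip->bui_next_extent, count);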

Brian

> +	xfs_trans_add_item(tp, &buip->bui_item);
> +	set_bit(XFS_LI_DIRTY, &buip->bui_item.li_flags);
> +	return &buip->bui_item;
> +}
> +
>  static const struct xfs_item_ops xfs_bui_item_ops = {
>  	.iop_size	= xfs_bui_item_size,
>  	.iop_format	= xfs_bui_item_format,
> @@ -542,6 +566,7 @@ static const struct xfs_item_ops xfs_bui_item_ops = {
>  	.iop_release	= xfs_bui_item_release,
>  	.iop_recover	= xfs_bui_item_recover,
>  	.iop_match	= xfs_bui_item_match,
> +	.iop_relog	= xfs_bui_item_relog,
>  };
>  
>  /*
> diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c
> index f544ec007c12..7c3eebfd9064 100644
> --- a/fs/xfs/xfs_extfree_item.c
> +++ b/fs/xfs/xfs_extfree_item.c
> @@ -648,6 +648,34 @@ xfs_efi_item_match(
>  	return EFI_ITEM(lip)->efi_format.efi_id == intent_id;
>  }
>  
> +/* Relog an intent item to push the log tail forward. */
> +static struct xfs_log_item *
> +xfs_efi_item_relog(
> +	struct xfs_log_item		*intent,
> +	struct xfs_trans		*tp)
> +{
> +	struct xfs_efd_log_item		*efdp;
> +	struct xfs_efi_log_item		*efip;
> +	struct xfs_extent		*extp;
> +	unsigned int			count;
> +
> +	count = EFI_ITEM(intent)->efi_format.efi_nextents;
> +	extp = EFI_ITEM(intent)->efi_format.efi_extents;
> +
> +	tp->t_flags |= XFS_TRANS_DIRTY;
> +	efdp = xfs_trans_get_efd(tp, EFI_ITEM(intent), count);
> +	efdp->efd_next_extent = count;
> +	memcpy(efdp->efd_format.efd_extents, extp, count * sizeof(*extp));
> +	set_bit(XFS_LI_DIRTY, &efdp->efd_item.li_flags);
> +
> +	efip = xfs_efi_init(tp->t_mountp, count);
> +	memcpy(efip->efi_format.efi_extents, extp, count * sizeof(*extp));
> +	atomic_set(&efip->efi_next_extent, count);
> +	xfs_trans_add_item(tp, &efip->efi_item);
> +	set_bit(XFS_LI_DIRTY, &efip->efi_item.li_flags);
> +	return &efip->efi_item;
> +}
> +
>  static const struct xfs_item_ops xfs_efi_item_ops = {
>  	.iop_size	= xfs_efi_item_size,
>  	.iop_format	= xfs_efi_item_format,
> @@ -655,6 +683,7 @@ static const struct xfs_item_ops xfs_efi_item_ops = {
>  	.iop_release	= xfs_efi_item_release,
>  	.iop_recover	= xfs_efi_item_recover,
>  	.iop_match	= xfs_efi_item_match,
> +	.iop_relog	= xfs_efi_item_relog,
>  };
>  
>  /*
> diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
> index 30e579b5857d..9af7099e2a86 100644
> --- a/fs/xfs/xfs_refcount_item.c
> +++ b/fs/xfs/xfs_refcount_item.c
> @@ -566,6 +566,32 @@ xfs_cui_item_match(
>  	return CUI_ITEM(lip)->cui_format.cui_id == intent_id;
>  }
>  
> +/* Relog an intent item to push the log tail forward. */
> +static struct xfs_log_item *
> +xfs_cui_item_relog(
> +	struct xfs_log_item		*intent,
> +	struct xfs_trans		*tp)
> +{
> +	struct xfs_cud_log_item		*cudp;
> +	struct xfs_cui_log_item		*cuip;
> +	struct xfs_phys_extent		*extp;
> +	unsigned int			count;
> +
> +	count = CUI_ITEM(intent)->cui_format.cui_nextents;
> +	extp = CUI_ITEM(intent)->cui_format.cui_extents;
> +
> +	tp->t_flags |= XFS_TRANS_DIRTY;
> +	cudp = xfs_trans_get_cud(tp, CUI_ITEM(intent));
> +	set_bit(XFS_LI_DIRTY, &cudp->cud_item.li_flags);
> +
> +	cuip = xfs_cui_init(tp->t_mountp, count);
> +	memcpy(cuip->cui_format.cui_extents, extp, count * sizeof(*extp));
> +	atomic_set(&cuip->cui_next_extent, count);
> +	xfs_trans_add_item(tp, &cuip->cui_item);
> +	set_bit(XFS_LI_DIRTY, &cuip->cui_item.li_flags);
> +	return &cuip->cui_item;
> +}
> +
>  static const struct xfs_item_ops xfs_cui_item_ops = {
>  	.iop_size	= xfs_cui_item_size,
>  	.iop_format	= xfs_cui_item_format,
> @@ -573,6 +599,7 @@ static const struct xfs_item_ops xfs_cui_item_ops = {
>  	.iop_release	= xfs_cui_item_release,
>  	.iop_recover	= xfs_cui_item_recover,
>  	.iop_match	= xfs_cui_item_match,
> +	.iop_relog	= xfs_cui_item_relog,
>  };
>  
>  /*
> diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
> index 34dd6b02bdc8..535b0358e0aa 100644
> --- a/fs/xfs/xfs_rmap_item.c
> +++ b/fs/xfs/xfs_rmap_item.c
> @@ -589,6 +589,32 @@ xfs_rui_item_match(
>  	return RUI_ITEM(lip)->rui_format.rui_id == intent_id;
>  }
>  
> +/* Relog an intent item to push the log tail forward. */
> +static struct xfs_log_item *
> +xfs_rui_item_relog(
> +	struct xfs_log_item		*intent,
> +	struct xfs_trans		*tp)
> +{
> +	struct xfs_rud_log_item		*rudp;
> +	struct xfs_rui_log_item		*ruip;
> +	struct xfs_map_extent		*extp;
> +	unsigned int			count;
> +
> +	count = RUI_ITEM(intent)->rui_format.rui_nextents;
> +	extp = RUI_ITEM(intent)->rui_format.rui_extents;
> +
> +	tp->t_flags |= XFS_TRANS_DIRTY;
> +	rudp = xfs_trans_get_rud(tp, RUI_ITEM(intent));
> +	set_bit(XFS_LI_DIRTY, &rudp->rud_item.li_flags);
> +
> +	ruip = xfs_rui_init(tp->t_mountp, count);
> +	memcpy(ruip->rui_format.rui_extents, extp, count * sizeof(*extp));
> +	atomic_set(&ruip->rui_next_extent, count);
> +	xfs_trans_add_item(tp, &ruip->rui_item);
> +	set_bit(XFS_LI_DIRTY, &ruip->rui_item.li_flags);
> +	return &ruip->rui_item;
> +}
> +
>  static const struct xfs_item_ops xfs_rui_item_ops = {
>  	.iop_size	= xfs_rui_item_size,
>  	.iop_format	= xfs_rui_item_format,
> @@ -596,6 +622,7 @@ static const struct xfs_item_ops xfs_rui_item_ops = {
>  	.iop_release	= xfs_rui_item_release,
>  	.iop_recover	= xfs_rui_item_recover,
>  	.iop_match	= xfs_rui_item_match,
> +	.iop_relog	= xfs_rui_item_relog,
>  };
>  
>  /*
> diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
> index a3a35a2d8ed9..362c155be525 100644
> --- a/fs/xfs/xfs_trace.h
> +++ b/fs/xfs/xfs_trace.h
> @@ -2533,6 +2533,7 @@ DEFINE_DEFER_PENDING_EVENT(xfs_defer_create_intent);
>  DEFINE_DEFER_PENDING_EVENT(xfs_defer_cancel_list);
>  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_finish);
>  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_abort);
> +DEFINE_DEFER_PENDING_EVENT(xfs_defer_relog_intent);
>  
>  #define DEFINE_BMAP_FREE_DEFERRED_EVENT DEFINE_PHYS_EXTENT_DEFERRED_EVENT
>  DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_bmap_free_defer);
> diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
> index 995c1513693c..e838e8327510 100644
> --- a/fs/xfs/xfs_trans.h
> +++ b/fs/xfs/xfs_trans.h
> @@ -78,6 +78,8 @@ struct xfs_item_ops {
>  	int (*iop_recover)(struct xfs_log_item *lip,
>  			   struct xfs_defer_capture **dfcp);
>  	bool (*iop_match)(struct xfs_log_item *item, uint64_t id);
> +	struct xfs_log_item *(*iop_relog)(struct xfs_log_item *intent,
> +			struct xfs_trans *tp);
>  };
>  
>  /*
> @@ -239,4 +241,12 @@ void		xfs_trans_buf_copy_type(struct xfs_buf *dst_bp,
>  
>  extern kmem_zone_t	*xfs_trans_zone;
>  
> +static inline struct xfs_log_item *
> +xfs_trans_item_relog(
> +	struct xfs_log_item	*lip,
> +	struct xfs_trans	*tp)
> +{
> +	return lip->li_ops->iop_relog(lip, tp);
> +}
> +
>  #endif	/* __XFS_TRANS_H__ */
> 



* Re: [PATCH 3/3] xfs: use the log grant push threshold to decide if we're going to relog deferred items
  2020-09-17  3:30 ` [PATCH 3/3] xfs: use the log grant push threshold to decide if we're going to relog deferred items Darrick J. Wong
@ 2020-09-17 15:28   ` Brian Foster
  2020-09-22 15:51     ` Darrick J. Wong
  0 siblings, 1 reply; 13+ messages in thread
From: Brian Foster @ 2020-09-17 15:28 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, david

On Wed, Sep 16, 2020 at 08:30:09PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> Now that we've landed a means for the defer ops manager to ask log items
> to relog themselves to move the log tail forward, we can improve how we
> decide when to relog so that we're not just using an arbitrary hardcoded
> value.
> 
> The XFS log has a "push threshold", which tells us how far we'd have to
> move the log tail forward to keep 25% of the ondisk log space available.
> We use this threshold to decide when to force defer ops chains to relog
> themselves.  This avoids unnecessary relogging (which adds extra steps
> to metadata updates) while helping us to avoid pinning the tail.
> 
> A better algorithm would be to relog only when we detect that the time
> required to move the tail forward is greater than the time remaining
> before all the log space gets used up, but letting the upper levels
> drive the relogging means that it is difficult to coordinate relogging
> the lowest LSN'd intents first.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---

FYI, the commit log doesn't match the git branch referenced in the cover
letter.

>  fs/xfs/libxfs/xfs_defer.c |   32 +++++++++++++++++++++++++-------
>  fs/xfs/xfs_log.c          |   41 +++++++++++++++++++++++++++++++----------
>  fs/xfs/xfs_log.h          |    2 ++
>  3 files changed, 58 insertions(+), 17 deletions(-)
> 
> 
> diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> index 7938e4d3af90..97ec36f32a0a 100644
> --- a/fs/xfs/libxfs/xfs_defer.c
> +++ b/fs/xfs/libxfs/xfs_defer.c
> @@ -17,6 +17,7 @@
>  #include "xfs_inode_item.h"
>  #include "xfs_trace.h"
>  #include "xfs_icache.h"
> +#include "xfs_log.h"
>  
>  /*
>   * Deferred Operations in XFS
> @@ -372,15 +373,35 @@ xfs_defer_relog(
>  	struct list_head		*dfops)
>  {
>  	struct xfs_defer_pending	*dfp;
> +	xfs_lsn_t			threshold_lsn;
>  
>  	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
>  
> +	/*
> +	 * Figure out where we need the tail to be in order to maintain the
> +	 * minimum required free space in the log.
> +	 */
> +	threshold_lsn = xlog_grant_push_threshold((*tpp)->t_mountp->m_log, 0);
> +	if (threshold_lsn == NULLCOMMITLSN)
> +		return 0;
> +
>  	list_for_each_entry(dfp, dfops, dfp_list) {
> +		/*
> +		 * If the log intent item for this deferred op is behind the
> +		 * threshold, we're running out of space and need to relog it
> +		 * to release the tail.
> +		 */
> +		if (dfp->dfp_intent == NULL ||

Any reason the NULL check isn't in the previous patch?

> +		    XFS_LSN_CMP(dfp->dfp_intent->li_lsn, threshold_lsn) < 0)
> +			continue;
> +

Logic looks backwards; we should relog (not skip) if li_lsn is within
the threshold, right?
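
I.e., a sketch with the sense flipped:

	/* Intents at or ahead of the push target don't pin the tail. */
	if (dfp->dfp_intent == NULL ||
	    XFS_LSN_CMP(dfp->dfp_intent->li_lsn, threshold_lsn) >= 0)
		continue;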

>  		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
>  		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
>  	}
>  
> -	return xfs_defer_trans_roll(tpp);
> +	if ((*tpp)->t_flags & XFS_TRANS_DIRTY)
> +		return xfs_defer_trans_roll(tpp);

I suspect this churn is eliminated if this code uses the threshold logic
from the start..

> +	return 0;
>  }
>  
>  /*
> @@ -444,7 +465,6 @@ xfs_defer_finish_noroll(
>  	struct xfs_trans		**tp)
>  {
>  	struct xfs_defer_pending	*dfp;
> -	unsigned int			nr_rolls = 0;
>  	int				error = 0;
>  	LIST_HEAD(dop_pending);
>  
> @@ -471,11 +491,9 @@ xfs_defer_finish_noroll(
>  			goto out_shutdown;
>  
>  		/* Every few rolls we relog all the intent items. */
> -		if (!(++nr_rolls % 7)) {
> -			error = xfs_defer_relog(tp, &dop_pending);
> -			if (error)
> -				goto out_shutdown;
> -		}
> +		error = xfs_defer_relog(tp, &dop_pending);
> +		if (error)
> +			goto out_shutdown;
>  
>  		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
>  				       dfp_list);
> diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
> index ad0c69ee8947..62c9e0aaa7df 100644
> --- a/fs/xfs/xfs_log.c
> +++ b/fs/xfs/xfs_log.c
> @@ -1475,14 +1475,15 @@ xlog_commit_record(
>  }
>  
>  /*
> - * Push on the buffer cache code if we ever use more than 75% of the on-disk
> - * log space.  This code pushes on the lsn which would supposedly free up
> - * the 25% which we want to leave free.  We may need to adopt a policy which
> - * pushes on an lsn which is further along in the log once we reach the high
> - * water mark.  In this manner, we would be creating a low water mark.
> + * Compute the LSN push target needed to push on the buffer cache code if we
> + * ever use more than 75% of the on-disk log space.  This code pushes on the
> + * lsn which would supposedly free up the 25% which we want to leave free.  We
> + * may need to adopt a policy which pushes on an lsn which is further along in
> + * the log once we reach the high water mark.  In this manner, we would be
> + * creating a low water mark.
>   */
> -STATIC void
> -xlog_grant_push_ail(
> +xfs_lsn_t
> +xlog_grant_push_threshold(
>  	struct xlog	*log,
>  	int		need_bytes)
>  {
> @@ -1508,7 +1509,7 @@ xlog_grant_push_ail(
>  	free_threshold = max(free_threshold, (log->l_logBBsize >> 2));
>  	free_threshold = max(free_threshold, 256);
>  	if (free_blocks >= free_threshold)
> -		return;
> +		return NULLCOMMITLSN;
>  
>  	xlog_crack_atomic_lsn(&log->l_tail_lsn, &threshold_cycle,
>  						&threshold_block);
> @@ -1528,13 +1529,33 @@ xlog_grant_push_ail(
>  	if (XFS_LSN_CMP(threshold_lsn, last_sync_lsn) > 0)
>  		threshold_lsn = last_sync_lsn;
>  
> +	return threshold_lsn;
> +}
> +
> +/*
> + * Push on the buffer cache code if we ever use more than 75% of the on-disk
> + * log space.  This code pushes on the lsn which would supposedly free up
> + * the 25% which we want to leave free.  We may need to adopt a policy which
> + * pushes on an lsn which is further along in the log once we reach the high
> + * water mark.  In this manner, we would be creating a low water mark.
> + */
> +STATIC void
> +xlog_grant_push_ail(
> +	struct xlog	*log,
> +	int		need_bytes)
> +{
> +	xfs_lsn_t	threshold_lsn;
> +
> +	threshold_lsn = xlog_grant_push_threshold(log, need_bytes);
> +	if (threshold_lsn == NULLCOMMITLSN || XLOG_FORCED_SHUTDOWN(log))
> +		return;
> +
>  	/*
>  	 * Get the transaction layer to kick the dirty buffers out to
>  	 * disk asynchronously. No point in trying to do this if
>  	 * the filesystem is shutting down.
>  	 */
> -	if (!XLOG_FORCED_SHUTDOWN(log))
> -		xfs_ail_push(log->l_ailp, threshold_lsn);
> +	xfs_ail_push(log->l_ailp, threshold_lsn);
>  }

Separate refactoring patch for the xfs_log.c bits, please.

Brian

>  
>  /*
> diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h
> index 1412d6993f1e..58c3fcbec94a 100644
> --- a/fs/xfs/xfs_log.h
> +++ b/fs/xfs/xfs_log.h
> @@ -141,4 +141,6 @@ void	xfs_log_quiesce(struct xfs_mount *mp);
>  bool	xfs_log_check_lsn(struct xfs_mount *, xfs_lsn_t);
>  bool	xfs_log_in_recovery(struct xfs_mount *);
>  
> +xfs_lsn_t xlog_grant_push_threshold(struct xlog *log, int need_bytes);
> +
>  #endif	/* __XFS_LOG_H__ */
> 



* Re: [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished
  2020-09-17 15:27   ` Brian Foster
@ 2020-09-17 16:38     ` Darrick J. Wong
  0 siblings, 0 replies; 13+ messages in thread
From: Darrick J. Wong @ 2020-09-17 16:38 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs, david

On Thu, Sep 17, 2020 at 11:27:55AM -0400, Brian Foster wrote:
> On Wed, Sep 16, 2020 at 08:29:53PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > The defer ops code has been finishing items in the wrong order -- if a
> > top level defer op creates items A and B, and finishing item A creates
> > more defer ops A1 and A2, we'll put the new items on the end of the
> > chain and process them in the order A B A1 A2.  This is kind of weird,
> > since it's convenient for programmers to be able to think of A and B as
> > an ordered sequence where all the sub-tasks for A must finish before we
> > move on to B, e.g. A A1 A2 B.
> > 
> > Right now, our log intent items are not so complex that this matters,
> > but this will become important for the atomic extent swapping patchset.
> > In order to maintain correct reference counting of extents, we have to
> > unmap and remap extents in that order, and we want to complete that work
> > before moving on to the next range that the user wants to swap.  This
> > patch fixes defer ops to satisfy that requirement.
> > 
> > The primary symptom of the incorrect order was noticed in an early
> > performance analysis of the atomic extent swap code.  An astonishingly
> > large number of deferred work items accumulated when userspace requested
> > an atomic update of two very fragmented files.  The cause of this was
> > traced to the same ordering bug in the inner loop of
> > xfs_defer_finish_noroll.
> > 
> > If the ->finish_item method of a deferred operation queues new deferred
> > operations, those new deferred ops are appended to the tail of the
> > pending work list.  To illustrate, say that a caller creates a
> > transaction t0 with four deferred operations D0-D3.  The first thing
> > defer ops does is roll the transaction to t1, leaving us with:
> > 
> > t1: D0(t0), D1(t0), D2(t0), D3(t0)
> > 
> > Let's say that finishing each of D0-D3 will create two new deferred ops.
> > After finish D0 and roll, we'll have the following chain:
> > 
> > t2: D1(t0), D2(t0), D3(t0), d4(t1), d5(t1)
> > 
> > d4 and d5 were logged to t1.  Notice that while we're about to start
> > work on D1, we haven't actually completed all the work implied by D0
> > being finished.  So far we've been careful (or lucky) to structure the
> > dfops callers such that D1 doesn't depend on d4 or d5 being finished,
> > but this is a potential logic bomb.
> > 
> > There's a second problem lurking.  Let's see what happens as we finish
> > D1-D3:
> > 
> > t3: D2(t0), D3(t0), d4(t1), d5(t1), d6(t2), d7(t2)
> > t4: D3(t0), d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3)
> > t5: d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
> > 
> > Let's say that d4-d11 are simple work items that don't queue any other
> > operations, which means that we can complete d4 and roll to t6:
> > 
> > t6: d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
> > t7: d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
> > ...
> > t11: d10(t4), d11(t4)
> > t12: d11(t4)
> > <done>
> > 
> > When we try to roll to transaction #12, we're holding defer op d11,
> > which we logged way back in t4.  This means that the tail of the log is
> > pinned at t4.  If the log is very small or there are a lot of other
> > threads updating metadata, this means that we might have wrapped the log
> > and cannot roll to t12 because there isn't enough space left before
> > we'd run into t4.
> > 
> > Let's shift back to the original failure.  I mentioned before that I
> > discovered this flaw while developing the atomic file update code.  In
> > that scenario, we have a defer op (D0) that finds a range of file blocks
> > to remap, creates a handful of new defer ops to do that, and then asks
> > to be continued with however much work remains.
> > 
> > So, D0 is the original swapext deferred op.  The first thing defer ops
> > does is rolls to t1:
> > 
> > t1: D0(t0)
> > 
> > We try to finish D0, logging d1 and d2 in the process, but can't get all
> > the work done.  We log a done item and a new intent item for the work
> > that D0 still has to do, and roll to t2:
> > 
> > t2: D0'(t1), d1(t1), d2(t1)
> > 
> > We roll and try to finish D0', but still can't get all the work done, so
> > we log a done item and a new intent item for it, requeue D0 a second
> > time, and roll to t3:
> > 
> > t3: D0''(t2), d1(t1), d2(t1), d3(t2), d4(t2)
> > 
> > If it takes 48 more rolls to complete D0, then we'll finally dispense
> > with D0 in t50:
> > 
> > t50: D<fifty primes>(t49), d1(t1), ..., d102(t50)
> > 
> > We then try to roll again to get a chain like this:
> > 
> > t51: d1(t1), d2(t1), ..., d101(t50), d102(t50)
> > ...
> > t152: d102(t50)
> > <done>
> > 
> > Notice that in rolling to transaction #51, we're holding on to a log
> > intent item for d1 that was logged in transaction #1.  This means that
> > the tail of the log is pinned at t1.  If the log is very small or there
> > are a lot of other threads updating metadata, this means that we might
> > have wrapped the log and cannot roll to t51 because there isn't enough
> > space left before we'd run into t1.  This is of course problem #2 again.
> > 
> > But notice the third problem with this scenario: we have 102 defer ops
> > tied to this transaction!  Each of these items is backed by pinned
> > kernel memory, which means that we risk OOM if the chains get too long.
> > 
> > Yikes.  Problem #1 is a subtle logic bomb that could hit someone in the
> > future; problem #2 applies (rarely) to the current upstream, and problem
> > #3 applies to work under development.
> > 
> > This is not how incremental deferred operations were supposed to work.
> > The dfops design of logging in the same transaction an intent-done item
> > and a new intent item for the work remaining was to make it so that we
> > only have to juggle enough deferred work items to finish that one small
> > piece of work.  Deferred log item recovery will find that first
> > unfinished work item and restart it, no matter how many other intent
> > items might follow it in the log.  Therefore, it's ok to put the new
> > intents at the start of the dfops chain.
> > 
> > For the first example, the chains look like this:
> > 
> > t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
> > t3: d5(t1), D1(t0), D2(t0), D3(t0)
> > ...
> > t9: d9(t7), D3(t0)
> > t10: D3(t0)
> > t11: d10(t10), d11(t10)
> > t12: d11(t10)
> > 
> > For the second example, the chains look like this:
> > 
> > t1: D0(t0)
> > t2: d1(t1), d2(t1), D0'(t1)
> > t3: d2(t1), D0'(t1)
> > t4: D0'(t1)
> > t5: d1(t4), d2(t4), D0''(t4)
> > ...
> > t148: D0<50 primes>(t147)
> > t149: d101(t148), d102(t148)
> > t150: d102(t148)
> > <done>
> > 
> 
> Ok, this one looks funny at first glance, but I think it makes sense..
> If I use a simple example and assume D0(t0) is a large unmap intent and
> the subsequent child dfops (d1, d2) are extent frees, this essentially
> means that we'd unmap an (incomplete) range, defer the frees for the
> range we did unmap, process those frees, then get back around to the
> unmap and pick up the next chunk for D0' (and so on). That would be
> instead of the current behavior which presumably processes the unmap (D0)
> continuously and queues up all of the frees (dN) until the former is
> 100% complete. Right?

Correct.
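
Concretely, with D0 as the unmap intent and d1/d2 as the frees queued
for each unmapped chunk, the chains from the commit message read:

	t2: d1(t1), d2(t1), D0'(t1)	<- frees run before D0 is revisited
	t5: d1(t4), d2(t4), D0''(t4)	<- ...and again for the next chunk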

> > This actually sucks more for pinning the log tail (we try to roll to t10
> > while holding an intent item that was logged in t1) but we've solved
> > problem #1.  We've also reduced the maximum chain length from:
> > 
> >     sum(all the new items) + nr_original_items
> > 
> > to:
> > 
> >     max(new items that each original item creates) + nr_original_items
> > 
> > This solves problem #3 by sharply reducing the number of defer ops that
> > can be attached to a transaction at any given time.  The change makes
> > the problem of log tail pinning worse, but is an improvement we need to
> > solve problem #2.  Actually solving #2, however, is left to the next
> > patch.
> > 
> > Note that a subsequent analysis of some hard-to-trigger reflink and COW
> > livelocks on extremely fragmented filesystems (or systems running a lot
> > of IO threads) showed the same symptoms -- uncomfortably large numbers
> > of incore deferred work items and occasional stalls in the transaction
> > grant code while waiting for log reservations.  I think this patch and
> > the next one will also solve these problems.
> > 
> > As originally written, the code used list_splice_tail_init instead of
> > list_splice_init, so change that, and leave a short comment explaining
> > our actions.
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  fs/xfs/libxfs/xfs_defer.c |   11 ++++++++++-
> >  1 file changed, 10 insertions(+), 1 deletion(-)
> > 
> > 
> > diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> > index 97523b394932..84a70edd0da1 100644
> > --- a/fs/xfs/libxfs/xfs_defer.c
> > +++ b/fs/xfs/libxfs/xfs_defer.c
> > @@ -431,8 +431,17 @@ xfs_defer_finish_noroll(
> >  
> >  	/* Until we run out of pending work to finish... */
> >  	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
> > +		/*
> > +		 * Deferred items that are created in the process of finishing
> > +		 * other deferred work items should be queued at the head of
> > +		 * the pending list, which puts them ahead of the deferred work
> > +		 * that was created by the caller.  This keeps the number of
> > +		 * pending work items to a minimum, which decreases the amount
> > +		 * of time that any one intent item can stick around in memory,
> > +		 * pinning the log tail.
> > +		 */
> >  		xfs_defer_create_intents(*tp);
> > -		list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);
> > +		list_splice_init(&(*tp)->t_dfops, &dop_pending);
> 
> This has me wondering whether we should consider explicit separation of
> chains through some kind of parent/child relationship to make the
> behavior and implementation easier to reason about.

I thought about that; the difficulty is that we don't put any particular
limits on the amount of dfops children you can create.  Recalling the CS
textbooks of my childhood, the old code effectively flattened the
parent-child dfops tree into a breadth-first list for traversal, whereas
this new code flattens that "tree" into depth-first order.

(Except it's not a tree, and the requeueing ability makes the
tree-traversal analogy leak like Firefox running modern Facebook.)
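
To make the two flattenings concrete: with B still pending when
finishing A queues A1 and A2, the two splice variants yield

	list_splice_tail_init:  dop_pending = B, A1, A2   (breadth-first)
	list_splice_init:       dop_pending = A1, A2, B   (depth-first)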

> That's a bigger
> picture topic, though. Assuming I'm following things correctly above,
> this looks reasonable to me:

Yup, you're following things correctly.  Thanks for reviewing!

--D

> Reviewed-by: Brian Foster <bfoster@redhat.com>
> 
> >  
> >  		error = xfs_defer_trans_roll(tp);
> >  		if (error)
> > 
> 


* Re: [PATCH 2/3] xfs: periodically relog deferred intent items
  2020-09-17 15:28   ` Brian Foster
@ 2020-09-18  0:36     ` Darrick J. Wong
  0 siblings, 0 replies; 13+ messages in thread
From: Darrick J. Wong @ 2020-09-18  0:36 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs, david

On Thu, Sep 17, 2020 at 11:28:16AM -0400, Brian Foster wrote:
> On Wed, Sep 16, 2020 at 08:30:00PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > There's a subtle design flaw in the deferred log item code that can lead
> > to pinning the log tail.  Taking up the defer ops chain examples from
> > the previous commit, we can get trapped in sequences like this:
> > 
> > Caller hands us a transaction t0 with D0-D3 attached.  The defer ops
> > chain will look like the following if the transaction rolls succeed:
> > 
> > t1: D0(t0), D1(t0), D2(t0), D3(t0)
> > t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0)
> > t3: d5(t1), D1(t0), D2(t0), D3(t0)
> > ...
> > t9: d9(t7), D3(t0)
> > t10: D3(t0)
> > t11: d10(t10), d11(t10)
> > t12: d11(t10)
> > 
> > In transaction 9, we finish d9 and try to roll to t10 while holding onto
> > an intent item for D3 that we logged in t0.
> > 
> > The previous commit changed the order in which we place new defer ops in
> > the defer ops processing chain to reduce the maximum chain length.  Now
> > make xfs_defer_finish_noroll capable of relogging the entire chain
> > periodically so that we can always move the log tail forward.  We do
> > this every seven loops, having observed that while most chains never
> > exceed seven items in length, the rest go far over that and seem to
> > be involved in most of the stall problems.
> > 
> > Callers are now required to ensure that the transaction reservation is
> > large enough to handle logging done items and new intent items for the
> > maximum possible chain length.  Most callers are careful to keep the
> > chain lengths low, so the overhead should be minimal.
> > 
> > (Note that in the next patch we'll make it so that we only relog on
> > demand, since 7 is an arbitrary number that I used here to get the basic
> > mechanics working.)
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  fs/xfs/libxfs/xfs_defer.c  |   30 ++++++++++++++++++++++++++++++
> >  fs/xfs/xfs_bmap_item.c     |   25 +++++++++++++++++++++++++
> >  fs/xfs/xfs_extfree_item.c  |   29 +++++++++++++++++++++++++++++
> >  fs/xfs/xfs_refcount_item.c |   27 +++++++++++++++++++++++++++
> >  fs/xfs/xfs_rmap_item.c     |   27 +++++++++++++++++++++++++++
> >  fs/xfs/xfs_trace.h         |    1 +
> >  fs/xfs/xfs_trans.h         |   10 ++++++++++
> >  7 files changed, 149 insertions(+)
> > 
> > 
> > diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> > index 84a70edd0da1..7938e4d3af90 100644
> > --- a/fs/xfs/libxfs/xfs_defer.c
> > +++ b/fs/xfs/libxfs/xfs_defer.c
> > @@ -361,6 +361,28 @@ xfs_defer_cancel_list(
> >  	}
> >  }
> >  
> > +/*
> > + * Prevent a log intent item from pinning the tail of the log by logging a
> > + * done item to release the intent item; and then log a new intent item.
> > + * The caller should provide a fresh transaction and roll it after we're done.
> > + */
> > +static int
> > +xfs_defer_relog(
> > +	struct xfs_trans		**tpp,
> > +	struct list_head		*dfops)
> > +{
> > +	struct xfs_defer_pending	*dfp;
> > +
> > +	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
> > +
> > +	list_for_each_entry(dfp, dfops, dfp_list) {
> > +		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
> > +		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
> > +	}
> > +
> > +	return xfs_defer_trans_roll(tpp);
> > +}
> > +
> >  /*
> >   * Log an intent-done item for the first pending intent, and finish the work
> >   * items.
> > @@ -422,6 +444,7 @@ xfs_defer_finish_noroll(
> >  	struct xfs_trans		**tp)
> >  {
> >  	struct xfs_defer_pending	*dfp;
> > +	unsigned int			nr_rolls = 0;
> >  	int				error = 0;
> >  	LIST_HEAD(dop_pending);
> >  
> > @@ -447,6 +470,13 @@ xfs_defer_finish_noroll(
> >  		if (error)
> >  			goto out_shutdown;
> >  
> > +		/* Every few rolls we relog all the intent items. */
> > +		if (!(++nr_rolls % 7)) {
> > +			error = xfs_defer_relog(tp, &dop_pending);
> > +			if (error)
> > +				goto out_shutdown;
> > +		}
> > +
> 
> I thought the purpose of the push threshold detection was to avoid this
> hardcoded relog logic..? On a quick look, I see the next patch
> immediately replaces this. That should probably be refactored such that
> the proper code is introduced from the start...

Ok.  I was keeping the mechanism & policy changes separate, but
admittedly it doesn't make that much sense, so I'll combine them with
the thing Dave suggested.

> >  		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
> >  				       dfp_list);
> >  		error = xfs_defer_finish_one(*tp, dfp);
> > diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
> > index 8461285a9dd9..2d4c60776714 100644
> > --- a/fs/xfs/xfs_bmap_item.c
> > +++ b/fs/xfs/xfs_bmap_item.c
> > @@ -535,6 +535,30 @@ xfs_bui_item_match(
> >  	return BUI_ITEM(lip)->bui_format.bui_id == intent_id;
> >  }
> >  
> > +/* Relog an intent item to push the log tail forward. */
> > +static struct xfs_log_item *
> > +xfs_bui_item_relog(
> > +	struct xfs_log_item		*intent,
> > +	struct xfs_trans		*tp)
> > +{
> > +	struct xfs_bud_log_item		*budp;
> > +	struct xfs_bui_log_item		*buip;
> > +	struct xfs_map_extent		*extp;
> > +
> > +	extp = BUI_ITEM(intent)->bui_format.bui_extents;
> > +
> > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > +	budp = xfs_trans_get_bud(tp, BUI_ITEM(intent));
> > +	set_bit(XFS_LI_DIRTY, &budp->bud_item.li_flags);
> > +
> > +	buip = xfs_bui_init(tp->t_mountp);
> > +	memcpy(buip->bui_format.bui_extents, extp, sizeof(*extp));
> > +	atomic_set(&buip->bui_next_extent, 1);
> 
> I was a little confused by the hardcoded value, then I saw that the max
> value is one. Since we have the extent array and ->bui_next_extent
> fields, perhaps this would be more robust if coded to copy those such
> that it wouldn't break if the max value changed..? E.g., memcpy() the maps *
> bui_nextents and copy the next_extent index directly.
> 
> (Looking further, I see that the other intent relog handlers follow that
> pattern presumably because they allow a count > 1. So IOW, I'd just
> follow that same pattern here.)

Ok, fixed.

--D

> Brian
> 
> > +	xfs_trans_add_item(tp, &buip->bui_item);
> > +	set_bit(XFS_LI_DIRTY, &buip->bui_item.li_flags);
> > +	return &buip->bui_item;
> > +}
> > +
> >  static const struct xfs_item_ops xfs_bui_item_ops = {
> >  	.iop_size	= xfs_bui_item_size,
> >  	.iop_format	= xfs_bui_item_format,
> > @@ -542,6 +566,7 @@ static const struct xfs_item_ops xfs_bui_item_ops = {
> >  	.iop_release	= xfs_bui_item_release,
> >  	.iop_recover	= xfs_bui_item_recover,
> >  	.iop_match	= xfs_bui_item_match,
> > +	.iop_relog	= xfs_bui_item_relog,
> >  };
> >  
> >  /*
> > diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c
> > index f544ec007c12..7c3eebfd9064 100644
> > --- a/fs/xfs/xfs_extfree_item.c
> > +++ b/fs/xfs/xfs_extfree_item.c
> > @@ -648,6 +648,34 @@ xfs_efi_item_match(
> >  	return EFI_ITEM(lip)->efi_format.efi_id == intent_id;
> >  }
> >  
> > +/* Relog an intent item to push the log tail forward. */
> > +static struct xfs_log_item *
> > +xfs_efi_item_relog(
> > +	struct xfs_log_item		*intent,
> > +	struct xfs_trans		*tp)
> > +{
> > +	struct xfs_efd_log_item		*efdp;
> > +	struct xfs_efi_log_item		*efip;
> > +	struct xfs_extent		*extp;
> > +	unsigned int			count;
> > +
> > +	count = EFI_ITEM(intent)->efi_format.efi_nextents;
> > +	extp = EFI_ITEM(intent)->efi_format.efi_extents;
> > +
> > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > +	efdp = xfs_trans_get_efd(tp, EFI_ITEM(intent), count);
> > +	efdp->efd_next_extent = count;
> > +	memcpy(efdp->efd_format.efd_extents, extp, count * sizeof(*extp));
> > +	set_bit(XFS_LI_DIRTY, &efdp->efd_item.li_flags);
> > +
> > +	efip = xfs_efi_init(tp->t_mountp, count);
> > +	memcpy(efip->efi_format.efi_extents, extp, count * sizeof(*extp));
> > +	atomic_set(&efip->efi_next_extent, count);
> > +	xfs_trans_add_item(tp, &efip->efi_item);
> > +	set_bit(XFS_LI_DIRTY, &efip->efi_item.li_flags);
> > +	return &efip->efi_item;
> > +}
> > +
> >  static const struct xfs_item_ops xfs_efi_item_ops = {
> >  	.iop_size	= xfs_efi_item_size,
> >  	.iop_format	= xfs_efi_item_format,
> > @@ -655,6 +683,7 @@ static const struct xfs_item_ops xfs_efi_item_ops = {
> >  	.iop_release	= xfs_efi_item_release,
> >  	.iop_recover	= xfs_efi_item_recover,
> >  	.iop_match	= xfs_efi_item_match,
> > +	.iop_relog	= xfs_efi_item_relog,
> >  };
> >  
> >  /*
> > diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
> > index 30e579b5857d..9af7099e2a86 100644
> > --- a/fs/xfs/xfs_refcount_item.c
> > +++ b/fs/xfs/xfs_refcount_item.c
> > @@ -566,6 +566,32 @@ xfs_cui_item_match(
> >  	return CUI_ITEM(lip)->cui_format.cui_id == intent_id;
> >  }
> >  
> > +/* Relog an intent item to push the log tail forward. */
> > +static struct xfs_log_item *
> > +xfs_cui_item_relog(
> > +	struct xfs_log_item		*intent,
> > +	struct xfs_trans		*tp)
> > +{
> > +	struct xfs_cud_log_item		*cudp;
> > +	struct xfs_cui_log_item		*cuip;
> > +	struct xfs_phys_extent		*extp;
> > +	unsigned int			count;
> > +
> > +	count = CUI_ITEM(intent)->cui_format.cui_nextents;
> > +	extp = CUI_ITEM(intent)->cui_format.cui_extents;
> > +
> > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > +	cudp = xfs_trans_get_cud(tp, CUI_ITEM(intent));
> > +	set_bit(XFS_LI_DIRTY, &cudp->cud_item.li_flags);
> > +
> > +	cuip = xfs_cui_init(tp->t_mountp, count);
> > +	memcpy(cuip->cui_format.cui_extents, extp, count * sizeof(*extp));
> > +	atomic_set(&cuip->cui_next_extent, count);
> > +	xfs_trans_add_item(tp, &cuip->cui_item);
> > +	set_bit(XFS_LI_DIRTY, &cuip->cui_item.li_flags);
> > +	return &cuip->cui_item;
> > +}
> > +
> >  static const struct xfs_item_ops xfs_cui_item_ops = {
> >  	.iop_size	= xfs_cui_item_size,
> >  	.iop_format	= xfs_cui_item_format,
> > @@ -573,6 +599,7 @@ static const struct xfs_item_ops xfs_cui_item_ops = {
> >  	.iop_release	= xfs_cui_item_release,
> >  	.iop_recover	= xfs_cui_item_recover,
> >  	.iop_match	= xfs_cui_item_match,
> > +	.iop_relog	= xfs_cui_item_relog,
> >  };
> >  
> >  /*
> > diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
> > index 34dd6b02bdc8..535b0358e0aa 100644
> > --- a/fs/xfs/xfs_rmap_item.c
> > +++ b/fs/xfs/xfs_rmap_item.c
> > @@ -589,6 +589,32 @@ xfs_rui_item_match(
> >  	return RUI_ITEM(lip)->rui_format.rui_id == intent_id;
> >  }
> >  
> > +/* Relog an intent item to push the log tail forward. */
> > +static struct xfs_log_item *
> > +xfs_rui_item_relog(
> > +	struct xfs_log_item		*intent,
> > +	struct xfs_trans		*tp)
> > +{
> > +	struct xfs_rud_log_item		*rudp;
> > +	struct xfs_rui_log_item		*ruip;
> > +	struct xfs_map_extent		*extp;
> > +	unsigned int			count;
> > +
> > +	count = RUI_ITEM(intent)->rui_format.rui_nextents;
> > +	extp = RUI_ITEM(intent)->rui_format.rui_extents;
> > +
> > +	tp->t_flags |= XFS_TRANS_DIRTY;
> > +	rudp = xfs_trans_get_rud(tp, RUI_ITEM(intent));
> > +	set_bit(XFS_LI_DIRTY, &rudp->rud_item.li_flags);
> > +
> > +	ruip = xfs_rui_init(tp->t_mountp, count);
> > +	memcpy(ruip->rui_format.rui_extents, extp, count * sizeof(*extp));
> > +	atomic_set(&ruip->rui_next_extent, count);
> > +	xfs_trans_add_item(tp, &ruip->rui_item);
> > +	set_bit(XFS_LI_DIRTY, &ruip->rui_item.li_flags);
> > +	return &ruip->rui_item;
> > +}
> > +
> >  static const struct xfs_item_ops xfs_rui_item_ops = {
> >  	.iop_size	= xfs_rui_item_size,
> >  	.iop_format	= xfs_rui_item_format,
> > @@ -596,6 +622,7 @@ static const struct xfs_item_ops xfs_rui_item_ops = {
> >  	.iop_release	= xfs_rui_item_release,
> >  	.iop_recover	= xfs_rui_item_recover,
> >  	.iop_match	= xfs_rui_item_match,
> > +	.iop_relog	= xfs_rui_item_relog,
> >  };
> >  
> >  /*
> > diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
> > index a3a35a2d8ed9..362c155be525 100644
> > --- a/fs/xfs/xfs_trace.h
> > +++ b/fs/xfs/xfs_trace.h
> > @@ -2533,6 +2533,7 @@ DEFINE_DEFER_PENDING_EVENT(xfs_defer_create_intent);
> >  DEFINE_DEFER_PENDING_EVENT(xfs_defer_cancel_list);
> >  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_finish);
> >  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_abort);
> > +DEFINE_DEFER_PENDING_EVENT(xfs_defer_relog_intent);
> >  
> >  #define DEFINE_BMAP_FREE_DEFERRED_EVENT DEFINE_PHYS_EXTENT_DEFERRED_EVENT
> >  DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_bmap_free_defer);
> > diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
> > index 995c1513693c..e838e8327510 100644
> > --- a/fs/xfs/xfs_trans.h
> > +++ b/fs/xfs/xfs_trans.h
> > @@ -78,6 +78,8 @@ struct xfs_item_ops {
> >  	int (*iop_recover)(struct xfs_log_item *lip,
> >  			   struct xfs_defer_capture **dfcp);
> >  	bool (*iop_match)(struct xfs_log_item *item, uint64_t id);
> > +	struct xfs_log_item *(*iop_relog)(struct xfs_log_item *intent,
> > +			struct xfs_trans *tp);
> >  };
> >  
> >  /*
> > @@ -239,4 +241,12 @@ void		xfs_trans_buf_copy_type(struct xfs_buf *dst_bp,
> >  
> >  extern kmem_zone_t	*xfs_trans_zone;
> >  
> > +static inline struct xfs_log_item *
> > +xfs_trans_item_relog(
> > +	struct xfs_log_item	*lip,
> > +	struct xfs_trans	*tp)
> > +{
> > +	return lip->li_ops->iop_relog(lip, tp);
> > +}
> > +
> >  #endif	/* __XFS_TRANS_H__ */
> > 
> 


* Re: [PATCH 3/3] xfs: use the log grant push threshold to decide if we're going to relog deferred items
  2020-09-17 15:28   ` Brian Foster
@ 2020-09-22 15:51     ` Darrick J. Wong
  0 siblings, 0 replies; 13+ messages in thread
From: Darrick J. Wong @ 2020-09-22 15:51 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs, david

On Thu, Sep 17, 2020 at 11:28:29AM -0400, Brian Foster wrote:
> On Wed, Sep 16, 2020 at 08:30:09PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > Now that we've landed a means for the defer ops manager to ask log items
> > to relog themselves to move the log tail forward, we can improve how we
> > decide when to relog so that we're not just using an arbitrary hardcoded
> > value.
> > 
> > The XFS log has a "push threshold", which tells us how far we'd have to
> > move the log tail forward to keep 25% of the ondisk log space available.
> > We use this threshold to decide when to force defer ops chains to relog
> > themselves.  This avoids unnecessary relogging (which adds extra steps
> > to metadata updates) while helping us to avoid pinning the tail.
> > 
> > A better algorithm would be to relog only when we detect that the time
> > required to move the tail forward is greater than the time remaining
> > before all the log space gets used up, but letting the upper levels
> > drive the relogging means that it is difficult to coordinate relogging
> > the lowest LSN'd intents first.
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> 
> FYI, the commit log doesn't match the git branch referenced in the cover
> letter.

> >  fs/xfs/libxfs/xfs_defer.c |   32 +++++++++++++++++++++++++-------
> >  fs/xfs/xfs_log.c          |   41 +++++++++++++++++++++++++++++++----------
> >  fs/xfs/xfs_log.h          |    2 ++
> >  3 files changed, 58 insertions(+), 17 deletions(-)
> > 
> > 
> > diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
> > index 7938e4d3af90..97ec36f32a0a 100644
> > --- a/fs/xfs/libxfs/xfs_defer.c
> > +++ b/fs/xfs/libxfs/xfs_defer.c
> > @@ -17,6 +17,7 @@
> >  #include "xfs_inode_item.h"
> >  #include "xfs_trace.h"
> >  #include "xfs_icache.h"
> > +#include "xfs_log.h"
> >  
> >  /*
> >   * Deferred Operations in XFS
> > @@ -372,15 +373,35 @@ xfs_defer_relog(
> >  	struct list_head		*dfops)
> >  {
> >  	struct xfs_defer_pending	*dfp;
> > +	xfs_lsn_t			threshold_lsn;
> >  
> >  	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);
> >  
> > +	/*
> > +	 * Figure out where we need the tail to be in order to maintain the
> > +	 * minimum required free space in the log.
> > +	 */
> > +	threshold_lsn = xlog_grant_push_threshold((*tpp)->t_mountp->m_log, 0);
> > +	if (threshold_lsn == NULLCOMMITLSN)
> > +		return 0;
> > +
> >  	list_for_each_entry(dfp, dfops, dfp_list) {
> > +		/*
> > +		 * If the log intent item for this deferred op is behind the
> > +		 * threshold, we're running out of space and need to relog it
> > +		 * to release the tail.
> > +		 */
> > +		if (dfp->dfp_intent == NULL ||
> 
> Any reason the NULL check isn't in the previous patch?

No.  Now that I've squashed all these patches together, all the wonky
churn should be eliminated.

> > +		    XFS_LSN_CMP(dfp->dfp_intent->li_lsn, threshold_lsn) < 0)
> > +			continue;
> > +
> 
> Logic looks backwards; we should relog (not skip) if li_lsn is within
> the threshold, right?

Oops, fixed now.

> >  		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
> >  		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
> >  	}
> >  
> > -	return xfs_defer_trans_roll(tpp);
> > +	if ((*tpp)->t_flags & XFS_TRANS_DIRTY)
> > +		return xfs_defer_trans_roll(tpp);
> 
> I suspect this churn is eliminated if this code uses the threshold logic
> from the start..
> 
> > +	return 0;
> >  }
> >  
> >  /*
> > @@ -444,7 +465,6 @@ xfs_defer_finish_noroll(
> >  	struct xfs_trans		**tp)
> >  {
> >  	struct xfs_defer_pending	*dfp;
> > -	unsigned int			nr_rolls = 0;
> >  	int				error = 0;
> >  	LIST_HEAD(dop_pending);
> >  
> > @@ -471,11 +491,9 @@ xfs_defer_finish_noroll(
> >  			goto out_shutdown;
> >  
> >  		/* Every few rolls we relog all the intent items. */
> > -		if (!(++nr_rolls % 7)) {
> > -			error = xfs_defer_relog(tp, &dop_pending);
> > -			if (error)
> > -				goto out_shutdown;
> > -		}
> > +		error = xfs_defer_relog(tp, &dop_pending);
> > +		if (error)
> > +			goto out_shutdown;
> >  
> >  		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
> >  				       dfp_list);
> > diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
> > index ad0c69ee8947..62c9e0aaa7df 100644
> > --- a/fs/xfs/xfs_log.c
> > +++ b/fs/xfs/xfs_log.c
> > @@ -1475,14 +1475,15 @@ xlog_commit_record(
> >  }
> >  
> >  /*
> > - * Push on the buffer cache code if we ever use more than 75% of the on-disk
> > - * log space.  This code pushes on the lsn which would supposedly free up
> > - * the 25% which we want to leave free.  We may need to adopt a policy which
> > - * pushes on an lsn which is further along in the log once we reach the high
> > - * water mark.  In this manner, we would be creating a low water mark.
> > + * Compute the LSN push target needed to push on the buffer cache code if we
> > + * ever use more than 75% of the on-disk log space.  This code pushes on the
> > + * lsn which would supposedly free up the 25% which we want to leave free.  We
> > + * may need to adopt a policy which pushes on an lsn which is further along in
> > + * the log once we reach the high water mark.  In this manner, we would be
> > + * creating a low water mark.
> >   */
> > -STATIC void
> > -xlog_grant_push_ail(
> > +xfs_lsn_t
> > +xlog_grant_push_threshold(
> >  	struct xlog	*log,
> >  	int		need_bytes)
> >  {
> > @@ -1508,7 +1509,7 @@ xlog_grant_push_ail(
> >  	free_threshold = max(free_threshold, (log->l_logBBsize >> 2));
> >  	free_threshold = max(free_threshold, 256);
> >  	if (free_blocks >= free_threshold)
> > -		return;
> > +		return NULLCOMMITLSN;
> >  
> >  	xlog_crack_atomic_lsn(&log->l_tail_lsn, &threshold_cycle,
> >  						&threshold_block);
> > @@ -1528,13 +1529,33 @@ xlog_grant_push_ail(
> >  	if (XFS_LSN_CMP(threshold_lsn, last_sync_lsn) > 0)
> >  		threshold_lsn = last_sync_lsn;
> >  
> > +	return threshold_lsn;
> > +}
> > +
> > +/*
> > + * Push on the buffer cache code if we ever use more than 75% of the on-disk
> > + * log space.  This code pushes on the lsn which would supposedly free up
> > + * the 25% which we want to leave free.  We may need to adopt a policy which
> > + * pushes on an lsn which is further along in the log once we reach the high
> > + * water mark.  In this manner, we would be creating a low water mark.
> > + */
> > +STATIC void
> > +xlog_grant_push_ail(
> > +	struct xlog	*log,
> > +	int		need_bytes)
> > +{
> > +	xfs_lsn_t	threshold_lsn;
> > +
> > +	threshold_lsn = xlog_grant_push_threshold(log, need_bytes);
> > +	if (threshold_lsn == NULLCOMMITLSN || XLOG_FORCED_SHUTDOWN(log))
> > +		return;
> > +
> >  	/*
> >  	 * Get the transaction layer to kick the dirty buffers out to
> >  	 * disk asynchronously. No point in trying to do this if
> >  	 * the filesystem is shutting down.
> >  	 */
> > -	if (!XLOG_FORCED_SHUTDOWN(log))
> > -		xfs_ail_push(log->l_ailp, threshold_lsn);
> > +	xfs_ail_push(log->l_ailp, threshold_lsn);
> >  }
> 
> Separate refactoring patch for the xfs_log.c bits, please.

Ok, fixed.

--D

> Brian
> 
> >  
> >  /*
> > diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h
> > index 1412d6993f1e..58c3fcbec94a 100644
> > --- a/fs/xfs/xfs_log.h
> > +++ b/fs/xfs/xfs_log.h
> > @@ -141,4 +141,6 @@ void	xfs_log_quiesce(struct xfs_mount *mp);
> >  bool	xfs_log_check_lsn(struct xfs_mount *, xfs_lsn_t);
> >  bool	xfs_log_in_recovery(struct xfs_mount *);
> >  
> > +xfs_lsn_t xlog_grant_push_threshold(struct xlog *log, int need_bytes);
> > +
> >  #endif	/* __XFS_LOG_H__ */
> > 
> 

Thread overview: 13+ messages
2020-09-17  3:29 [PATCH 0/3] xfs: fix some log stalling problems in defer ops Darrick J. Wong
2020-09-17  3:29 ` [PATCH 1/3] xfs: change the order in which child and parent defer ops are finished Darrick J. Wong
2020-09-17  5:57   ` Dave Chinner
2020-09-17 15:27   ` Brian Foster
2020-09-17 16:38     ` Darrick J. Wong
2020-09-17  3:30 ` [PATCH 2/3] xfs: periodically relog deferred intent items Darrick J. Wong
2020-09-17  6:11   ` Dave Chinner
2020-09-17  7:18     ` Darrick J. Wong
2020-09-17 15:28   ` Brian Foster
2020-09-18  0:36     ` Darrick J. Wong
2020-09-17  3:30 ` [PATCH 3/3] xfs: use the log grant push threshold to decide if we're going to relog deferred items Darrick J. Wong
2020-09-17 15:28   ` Brian Foster
2020-09-22 15:51     ` Darrick J. Wong
