From: Gao Xiang <hsiangkao@redhat.com>
To: linux-xfs@vger.kernel.org
Cc: Gao Xiang <hsiangkao@redhat.com>, Zorro Lang <zlang@redhat.com>
Subject: [PATCH] xfs: don't use in-core per-cpu fdblocks for !lazysbcount
Date: Fri, 16 Apr 2021 17:10:23 +0800
Message-ID: <20210416091023.2143162-1-hsiangkao@redhat.com>

There are many paths that can trigger xfs_log_sb(), e.g.
  xfs_bmap_add_attrfork()
    -> xfs_log_sb()
which overwrites the on-disk fdblocks with the in-core per-CPU fdblocks.

However, for !lazysbcount cases, the on-disk fdblocks is actually
maintained by xfs_trans_apply_sb_deltas(), and it is generally not
equal to the in-core fdblocks due to xfs_reserve_blocks() and similar
paths; see the comment in xfs_unmountfs().
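
As a rough user-space illustration (not kernel code: model_sb,
apply_delta() and reserve_blocks() below are hypothetical stand-ins
for the superblock counters, xfs_trans_apply_sb_deltas() and
xfs_reserve_blocks(), and all numbers are made up), the divergence
can be modeled like this:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical model, not kernel code. */
struct model_sb {
	int64_t	ondisk_fdblocks;	/* changed only by logged deltas */
	int64_t	incore_fdblocks;	/* per-CPU counter sum (modeled) */
	int64_t	resblks;		/* in-core set-aside pool */
};

/* like xfs_trans_apply_sb_deltas(): the delta reaches disk too */
static void apply_delta(struct model_sb *sb, int64_t delta)
{
	sb->ondisk_fdblocks += delta;
	sb->incore_fdblocks += delta;
}

/* like xfs_reserve_blocks(): in-core movement only, nothing logged */
static void reserve_blocks(struct model_sb *sb, int64_t n)
{
	sb->incore_fdblocks -= n;
	sb->resblks += n;
}

int main(void)
{
	struct model_sb sb = { 1000, 1000, 0 };

	reserve_blocks(&sb, 16);	/* reserve pool set-aside */
	apply_delta(&sb, -100);		/* ordinary allocation activity */

	/* prints: on-disk 900 vs in-core sum 884 */
	printf("on-disk %lld vs in-core sum %lld\n",
	       (long long)sb.ondisk_fdblocks,
	       (long long)sb.incore_fdblocks);
	return 0;
}

In this model, the pre-patch xfs_log_sb() would overwrite the correct
on-disk value (900) with the smaller in-core sum (884), which is what
xfs_repair later flags.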

The problem can be observed with the following steps, reported by
Zorro [1]:

1. mkfs.xfs -f -l lazy-count=0 -m crc=0 $dev
2. mount $dev $mnt
3. fsstress -d $mnt -p 100 -n 1000 (the amount of I/O load may need
   adjusting)
4. umount $mnt
5. xfs_repair -n $dev

Note that due to commit f46e5a174655 ("xfs: fold sbcount quiesce
logging into log covering"), xfs_sync_sb() is now triggered at unmount
time even in the !lazysbcount case whenever xfs_log_need_covered() is
true, so the problem is hard to reproduce on kernels 5.12+.

With this patch applied, I've seen nothing unusual so far on older
kernels running the test case above without lazysbcount.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1949515

Reported-by: Zorro Lang <zlang@redhat.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
---
 fs/xfs/libxfs/xfs_sb.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
index 60e6d255e5e2..423dada3f64c 100644
--- a/fs/xfs/libxfs/xfs_sb.c
+++ b/fs/xfs/libxfs/xfs_sb.c
@@ -928,7 +928,13 @@ xfs_log_sb(
 
 	mp->m_sb.sb_icount = percpu_counter_sum(&mp->m_icount);
 	mp->m_sb.sb_ifree = percpu_counter_sum(&mp->m_ifree);
-	mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks);
+	if (!xfs_sb_version_haslazysbcount(&mp->m_sb)) {
+		struct xfs_dsb	*dsb = bp->b_addr;
+
+		mp->m_sb.sb_fdblocks = be64_to_cpu(dsb->sb_fdblocks);
+	} else {
+		mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks);
+	}
 
 	xfs_sb_to_disk(bp->b_addr, &mp->m_sb);
 	xfs_trans_buf_set_type(tp, bp, XFS_BLFT_SB_BUF);
-- 
2.27.0


Thread overview: 14+ messages
2021-04-16  9:10 Gao Xiang [this message]
2021-04-16 14:10 ` [PATCH] xfs: don't use in-core per-cpu fdblocks for !lazysbcount Carlos Maiolino
2021-04-16 20:45   ` Gao Xiang
2021-04-16 16:00 ` Darrick J. Wong
2021-04-16 21:13   ` Gao Xiang
2021-04-16 21:36     ` Gao Xiang
2021-04-17  0:19     ` Darrick J. Wong
2021-04-17  1:57       ` Dave Chinner
2021-04-17  2:20         ` Gao Xiang
2021-04-17 22:32           ` Dave Chinner
2021-04-17 23:59             ` Gao Xiang
2021-04-18 22:08               ` Dave Chinner
2021-04-19  0:38                 ` Gao Xiang
2021-04-20 17:17             ` Darrick J. Wong
