From: Chandan Rajendra <chandanrlinux@gmail.com>
To: fstests@vger.kernel.org
Cc: Chandan Rajendra <chandanrlinux@gmail.com>,
	linux-xfs@vger.kernel.org, chandan@linux.ibm.com
Subject: [PATCH] common/xfs: Execute _xfs_check only for block size <= 4k
Date: Tue, 24 Mar 2020 09:17:29 +0530
Message-ID: <20200324034729.32678-1-chandanrlinux@gmail.com>

fsstress, when executed as part of some tests (e.g. generic/270),
invokes the chown() syscall many times, passing random integers as the
uid argument. For each such invocation for which there is no on-disk
quota block, xfs calls xfs_dquot_disk_alloc(), which allocates a new
block and instantiates all of the quota structures mapped by the newly
allocated block. A single 64k block therefore holds 16 times as many
on-disk quota structures as a 4k block.
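
For illustration, a chown-heavy fsstress run along the following lines
is enough to trigger this pattern (a sketch only; the exact operation
mix generic/270 uses differs, and $FSSTRESS_PROG/$SCRATCH_MNT are the
usual fstests variables):

  # Zero all operation weights, then drive only chown() with random
  # uids so that many distinct quota-id ranges get touched, forcing
  # xfs_dquot_disk_alloc() to initialize whole blocks of dquots.
  $FSSTRESS_PROG -z -f chown=100 -n 10000 -p 4 -d $SCRATCH_MNT/fsstress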

xfs_db's check command (executed after the test script finishes) reads
all of the on-disk quota structures into memory. On filesystems with a
64k block size this triggers an OOM event; on machines with a
sufficiently large amount of system memory it instead makes the test
run for a very long time.
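
The effect can be reproduced by running the check by hand against the
unmounted scratch device; a minimal sketch:

  # Even with the test option (-t), which skips indexing of the free
  # space trees, check still pulls every on-disk dquot into memory.
  xfs_db -c "check -t" $SCRATCH_DEV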

For the above reasons, this commit disables execution of xfs_db's
check command when working on filesystems whose block size is larger
than 4k.

Signed-off-by: Chandan Rajendra <chandanrlinux@gmail.com>
---
 common/xfs | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/common/xfs b/common/xfs
index d9a9784f..d65c38d8 100644
--- a/common/xfs
+++ b/common/xfs
@@ -455,10 +455,19 @@ _check_xfs_filesystem()
 		ok=0
 	fi
 
-	# xfs_check runs out of memory on large files, so even providing the test
-	# option (-t) to avoid indexing the free space trees doesn't make it pass on
-	# large filesystems. Avoid it.
-	if [ "$LARGE_SCRATCH_DEV" != yes ]; then
+	dbsize="$($XFS_INFO_PROG "${device}" | grep data.*bsize | sed -e 's/^.*bsize=//g' -e 's/\([0-9]*\).*$/\1/g')"
+
+	# xfs_check runs out of memory in two cases:
+	# 1. On large files. So even providing the test option (-t) to
+	# avoid indexing the free space trees doesn't make it pass on
+	# large filesystems.
+	# 2. When checking filesystems with a large number of quota
+	# structures. This happens consistently with a 64k block size
+	# when a large number of on-disk quota structures whose quota
+	# ids are spread across a large integer range are created.
+	#
+	# Hence avoid it in both cases.
+	if [ $dbsize -le 4096 -a "$LARGE_SCRATCH_DEV" != yes ]; then
 		_xfs_check $extra_log_options $device 2>&1 > $tmp.fs_check
 	fi
 	if [ -s $tmp.fs_check ]; then
-- 
2.19.1
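
For reference, the dbsize pipeline added above extracts the block size
from the "data" section of xfs_info output. On a 4k filesystem the
exchange looks roughly like this (sample output; the device name is
hypothetical):

  $ xfs_info /dev/sdb1 | grep 'data.*bsize'
  data     =                       bsize=4096   blocks=26214400, imaxpct=25

The first sed expression strips everything up to and including
"bsize=", and the second drops everything after the leading run of
digits, leaving just "4096".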

