From: "Darrick J. Wong" <djwong@kernel.org>
To: Gao Xiang <hsiangkao@redhat.com>
Cc: linux-xfs@vger.kernel.org, fstests@vger.kernel.org
Subject: Re: [WIP] [RFC PATCH] xfs: add test on shrinking unused space in the last AG
Date: Tue, 9 Mar 2021 10:03:49 -0800
Message-ID: <20210309180349.GC7269@magnolia>
In-Reply-To: <20201028230909.639698-1-hsiangkao@redhat.com>

On Thu, Oct 29, 2020 at 07:09:09AM +0800, Gao Xiang wrote:
> This adds a test case that shrinks as much unused space as possible
> in the last AG while a background fsstress workload runs.
> 
> The expectation is that no crash occurs and the output matches.
> 
> Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
> ---
>  tests/xfs/522     | 125 ++++++++++++++++++++++++++++++++++++++++++++++
>  tests/xfs/522.out |  73 +++++++++++++++++++++++++++
>  tests/xfs/group   |   1 +
>  3 files changed, 199 insertions(+)
>  create mode 100755 tests/xfs/522
>  create mode 100644 tests/xfs/522.out
> 
> diff --git a/tests/xfs/522 b/tests/xfs/522
> new file mode 100755
> index 00000000..e427a33a
> --- /dev/null
> +++ b/tests/xfs/522
> @@ -0,0 +1,125 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (c) 2020 Red Hat, Inc.  All Rights Reserved.
> +#
> +# FS QA Test 522
> +#
> +# XFS online shrinkfs-while-allocating tests
> +#
> +# This test attempts to shrink unused space as much as possible with a
> +# background fsstress workload. It decreases the shrink size if a larger
> +# size fails, and repeats the whole process 6 times in total.
> +#
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1	# failure is the default!
> +trap "rm -f $tmp.*; exit \$status" 0 1 2 3 15
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +
> +_create_scratch()

Functions in tests do not need to be prefixed with "_" since they're not
global symbols.

> +{
> +	echo "*** mkfs"
> +	_scratch_mkfs_xfs $@ | tee -a $seqres.full | _filter_mkfs 2>$tmp.mkfs
> +	. $tmp.mkfs
> +
> +	echo "*** mount"
> +	if ! _try_scratch_mount 2>/dev/null
> +	then
> +		echo "failed to mount $SCRATCH_DEV"
> +		exit 1
> +	fi
> +
> +	# fix the reserve block pool to a known size so that the enospc
> +	# calculations work out correctly.
> +	_scratch_resvblks 1024 >  /dev/null 2>&1
> +}
> +
> +_fill_scratch()
> +{
> +	$XFS_IO_PROG -f -c "resvsp 0 ${1}" $SCRATCH_MNT/resvfile
> +}
> +
> +_stress_scratch()
> +{
> +	procs=3
> +	nops=1000
> +	# -w ensures that the only ops are ones which cause write I/O
> +	FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
> +	    -n $nops $FSSTRESS_AVOID`
> +	$FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
> +}
> +
> +# real QA test starts here
> +_supported_fs xfs
> +_require_scratch
> +_require_xfs_io_command "falloc"
> +
> +rm -f $seqres.full
> +_scratch_mkfs_xfs | tee -a $seqres.full | _filter_mkfs 2>$tmp.mkfs
> +. $tmp.mkfs	# extract blocksize and data size for scratch device
> +
> +endsize=`expr 125 \* 1048576`	# stop after shrinking this big
> +[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
> +
> +nags=2
> +totalcount=6
> +
> +while [ $totalcount -gt 0 ]; do

So we run this six times?  Why six, specifically?  Should it be scaled
by TIME_FACTOR?
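
Something like this is what I had in mind (just a sketch; keep 6 as the
base count and let TIME_FACTOR stretch it for longer test runs):

	totalcount=$((6 * TIME_FACTOR))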

> +	size=`expr 1010 \* 1048576`	# 1010 megabytes initially
> +	echo "*** creating scratch filesystem"
> +	logblks=$(_scratch_find_xfs_min_logblocks -dsize=${size} -dagcount=${nags})
> +
> +	_create_scratch -lsize=${logblks}b -dsize=${size} -dagcount=${nags}
> +
> +	echo "*** using some initial space on scratch filesystem"
> +	for i in `seq 125 -1 90`; do
> +		fillsize=`expr $i \* 1048576`
> +		out="$(_fill_scratch $fillsize 2>&1)"
> +		echo "$out" | grep -q 'No space left on device' && continue
> +		test -n "${out}" && echo "$out"
> +		break
> +	done

_fill_fs ?
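
Rough sketch of what I mean -- the _fill_fs helper from common/populate,
which (if I remember its arguments right: file size, directory, block
size, switch_user flag) could replace the hand-rolled resvsp loop; please
double-check its current signature before using it:

	. ./common/populate
	_fill_fs $fillsize $SCRATCH_MNT/fill $dbsize 0 >> $seqres.full 2>&1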

> +
> +	decsize=`expr  42 \* 1048576`	# shrink in chunks of this size at most

This could go outside the loop.

> +	echo "*** stressing filesystem"
> +	while [ $size -gt $endsize ]; do
> +		_stress_scratch
> +		sleep 1
> +
> +		decb=`expr $decsize / $dbsize`    # in data blocks
> +		while [ $decb -gt 0 ]; do
> +			sizeb=`expr $size / $dbsize - $decb`
> +
> +			xfs_growfs -D ${sizeb} $SCRATCH_MNT 2>&1 \

Use $XFS_GROWFS_PROG, do not call xfs_growfs directly.
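
i.e. something like:

	$XFS_GROWFS_PROG -D ${sizeb} $SCRATCH_MNT 2>&1 \
		| tee -a $seqres.full | _filter_mkfs 2>$tmp.growfs > /dev/null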

> +				| tee -a $seqres.full | _filter_mkfs 2>$tmp.growfs > /dev/null
> +
> +			ret="${PIPESTATUS[0]}"
> +			. $tmp.growfs
> +
> +			[ $ret -eq 0 ] && break
> +
> +			[ $decb -gt 100 ] && decb=`expr $decb + $RANDOM % 10`
> +			decb=`expr $decb / 2`

So... uh... what does this decb logic do?  AFAICT we start by removing
42MB from the end of the filesystem, and then we ... add some random
quantity to decb, halve it, then shrink by that amount?  And we keep
doing smaller and smaller pieces until the shrink fails or decb becomes
zero...
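
Writing my reading of it out with comments (not a suggested replacement,
just checking that I understand the intent):

	decb=$((decsize / dbsize))		# start with a ~42MB step, in blocks
	while [ $decb -gt 0 ]; do
		sizeb=$((size / dbsize - decb))	# target size for this attempt
		# try the shrink; if it succeeds, this pass is done
		$XFS_GROWFS_PROG -D $sizeb $SCRATCH_MNT >> $seqres.full 2>&1 && break
		# shrink failed: perturb the step a little, then halve it and retry
		[ $decb -gt 100 ] && decb=$((decb + RANDOM % 10))
		decb=$((decb / 2))
	done

Is that about right?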

> +		done
> +
> +		wait
> +		[ $decb -eq 0 ] && break

...after which we wait for fsstress to end and then loop back to
fsstress and shrinking?

I was expecting to see two tests: a basic functionality test, and then a
second one to race fsstress and shrink to see what happens.

> +
> +		size=`expr $size - $decb \* $dbsize`

Why don't we query the size of the filesystem instead of calculating it
and hoping that reflects reality?
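
e.g. something like this (a sketch; read geom.datablocks back out of
xfs_io's statfs output instead of trusting our own arithmetic):

	dblocks_now=$($XFS_IO_PROG -c "statfs" $SCRATCH_MNT | \
		awk '/geom.datablocks/ { print $3 }')
	size=$((dblocks_now * dbsize))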

--D

> +	done
> +
> +	_scratch_unmount
> +	_repair_scratch_fs >> $seqres.full
> +	totalcount=`expr $totalcount - 1`
> +done
> +
> +status=0
> +exit
> diff --git a/tests/xfs/522.out b/tests/xfs/522.out
> new file mode 100644
> index 00000000..03d512f5
> --- /dev/null
> +++ b/tests/xfs/522.out
> @@ -0,0 +1,73 @@
> +QA output created by 522
> +meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
> +data     = bsize=XXX blocks=XXX, imaxpct=PCT
> +         = sunit=XXX swidth=XXX, unwritten=X
> +naming   =VERN bsize=XXX
> +log      =LDEV bsize=XXX blocks=XXX
> +realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
> +*** creating scratch filesystem
> +*** mkfs
> +meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
> +data     = bsize=XXX blocks=XXX, imaxpct=PCT
> +         = sunit=XXX swidth=XXX, unwritten=X
> +naming   =VERN bsize=XXX
> +log      =LDEV bsize=XXX blocks=XXX
> +realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
> +*** mount
> +*** using some initial space on scratch filesystem
> +*** stressing filesystem
> +*** creating scratch filesystem
> +*** mkfs
> +meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
> +data     = bsize=XXX blocks=XXX, imaxpct=PCT
> +         = sunit=XXX swidth=XXX, unwritten=X
> +naming   =VERN bsize=XXX
> +log      =LDEV bsize=XXX blocks=XXX
> +realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
> +*** mount
> +*** using some initial space on scratch filesystem
> +*** stressing filesystem
> +*** creating scratch filesystem
> +*** mkfs
> +meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
> +data     = bsize=XXX blocks=XXX, imaxpct=PCT
> +         = sunit=XXX swidth=XXX, unwritten=X
> +naming   =VERN bsize=XXX
> +log      =LDEV bsize=XXX blocks=XXX
> +realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
> +*** mount
> +*** using some initial space on scratch filesystem
> +*** stressing filesystem
> +*** creating scratch filesystem
> +*** mkfs
> +meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
> +data     = bsize=XXX blocks=XXX, imaxpct=PCT
> +         = sunit=XXX swidth=XXX, unwritten=X
> +naming   =VERN bsize=XXX
> +log      =LDEV bsize=XXX blocks=XXX
> +realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
> +*** mount
> +*** using some initial space on scratch filesystem
> +*** stressing filesystem
> +*** creating scratch filesystem
> +*** mkfs
> +meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
> +data     = bsize=XXX blocks=XXX, imaxpct=PCT
> +         = sunit=XXX swidth=XXX, unwritten=X
> +naming   =VERN bsize=XXX
> +log      =LDEV bsize=XXX blocks=XXX
> +realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
> +*** mount
> +*** using some initial space on scratch filesystem
> +*** stressing filesystem
> +*** creating scratch filesystem
> +*** mkfs
> +meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
> +data     = bsize=XXX blocks=XXX, imaxpct=PCT
> +         = sunit=XXX swidth=XXX, unwritten=X
> +naming   =VERN bsize=XXX
> +log      =LDEV bsize=XXX blocks=XXX
> +realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
> +*** mount
> +*** using some initial space on scratch filesystem
> +*** stressing filesystem
> diff --git a/tests/xfs/group b/tests/xfs/group
> index b89c0a4e..ab762ed6 100644
> --- a/tests/xfs/group
> +++ b/tests/xfs/group
> @@ -519,3 +519,4 @@
>  519 auto quick reflink
>  520 auto quick reflink
>  521 auto quick realtime growfs
> +522 auto quick growfs
> -- 
> 2.18.1
> 
