From: Stefan Behrens <sbehrens@giantdisaster.de>
To: xfs@oss.sgi.com
Cc: linux-btrfs@vger.kernel.org, jbacik@fusionio.com
Subject: [PATCH] xfstests: btrfs/011 improvement for compressed filesystems
Date: Fri, 13 Sep 2013 12:27:21 +0200	[thread overview]
Message-ID: <1379068041-4299-1-git-send-email-sbehrens@giantdisaster.de> (raw)

Josef noticed that using /dev/zero to generate most of the test
data doesn't work if someone overrides the mount options to
enable compression. The test that performs a cancellation failed
because the replace operation had already finished by the time
the cancel request was executed.
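
For illustration only (not part of the test): gzip, used here as a
stand-in for btrfs compression, shows why zero-filled data defeats
the cancel test. A megabyte of zeros collapses to about a kilobyte,
while random data stays essentially incompressible:

```shell
# Compare how well zeros vs. random bytes compress; gzip stands in
# for the filesystem's compression here. Paths are illustrative.
tmp=$(mktemp -d)
dd if=/dev/zero    of="$tmp/z" bs=1M count=1 2>/dev/null
dd if=/dev/urandom of="$tmp/r" bs=1M count=1 2>/dev/null
gzip -c "$tmp/z" > "$tmp/z.gz"
gzip -c "$tmp/r" > "$tmp/r.gz"
# The zero file shrinks by roughly three orders of magnitude;
# the random file stays near its original 1 MiB.
wc -c "$tmp/z.gz" "$tmp/r.gz"
rm -rf "$tmp"
```

A replace operation over a compressed, zero-filled filesystem
therefore has almost nothing to copy and can finish before the
cancel request gets a chance to race with it.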

Since /dev/urandom is too slow to generate multiple GB of data,
this patch completely changes how the filesystem data is
generated: /dev/urandom is now used to generate a single 1MB
file, and this file is copied up to 2048 times. /dev/zero is no
longer used.

The runtime of the test is about the same as before. Compression
now works; online deduplication would again cause issues, but we
don't have online deduplication today.
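
The new scheme in a standalone sketch (paths and the copy count are
illustrative, not the exact test code): draw 1 MiB from /dev/urandom
once, then fan it out with cheap cp calls:

```shell
# Generate one incompressible 1 MiB template, then replicate it;
# cp is far faster than reading /dev/urandom for every file.
tmp=$(mktemp -d)
dd if=/dev/urandom of="$tmp/t0" bs=1M count=1 2>/dev/null
for i in $(seq 1 4); do          # the test uses up to 2048 copies
	cp "$tmp/t0" "$tmp/t$i" || { echo "cp failed"; exit 1; }
done
ls "$tmp" | wc -l                # 5 files: the template plus 4 copies
rm -rf "$tmp"
```

Because every copy has the same content, an online-deduplicating
filesystem would collapse them again, which is the caveat noted above.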

Reported-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
---
 tests/btrfs/011 | 46 ++++++++++++++++++++++++++++++----------------
 1 file changed, 30 insertions(+), 16 deletions(-)

diff --git a/tests/btrfs/011 b/tests/btrfs/011
index c8b4aac..71ff3de 100755
--- a/tests/btrfs/011
+++ b/tests/btrfs/011
@@ -78,6 +78,7 @@ workout()
 	local quick="$4"
 	local source_dev="`echo ${SCRATCH_DEV_POOL} | awk '{print $1}'`"
 	local target_dev="`echo ${SCRATCH_DEV_POOL} | awk '{print $NF}'`"
+	local fssize
 
 	if [ "`echo $SCRATCH_DEV_POOL | wc -w`" -lt `expr $num_devs4raid + 1` ]; then
 		echo "Skip workout $1 $2 $3 $4" >> $seqres.full
@@ -107,33 +108,46 @@ workout()
 		_notrun "Different device sizes detected"
 	fi
 
+	if [ `$BTRFS_SHOW_SUPER_PROG $SCRATCH_DEV | grep dev_item.total_bytes | awk '{print $2}'` -lt 2500000000 ]; then
+		_notrun "device size too small"
+	fi
+
 	_scratch_mount
 
-	# Generate 500 times 20K extents in the data chunk and fill up
-	# metadata with inline extents. Ignore ENOSPC.
+	# Generate metadata and some minimal user data, generate 500 times
+	# 20K extents in the data chunk and fill up metadata with inline
+	# extents.
 	for i in `seq 1 500`; do
 		dd if=/dev/urandom of=$SCRATCH_MNT/l$i bs=16385 count=1
 		dd if=/dev/urandom of=$SCRATCH_MNT/s$i bs=3800 count=1
 	done > /dev/null 2>&1
 
+	# /dev/urandom is slow, but it has the benefit that the
+	# generated content does not shrink under compression.
+	# Generate a template once and quickly copy it multiple times.
+	# Obviously with online deduplication this will not work anymore.
+	dd if=/dev/urandom of=$SCRATCH_MNT/t0 bs=1M count=1 > /dev/null 2>&1
+
 	if [ "${quick}Q" = "thoroughQ" ]; then
 		# The intention of this "thorough" test is to increase
 		# the probability of random errors, in particular in
 		# conjunction with the background noise generator and
-		# a sync call while the replace operation in ongoing.
-		# Unfortunately it takes quite some time to generate
-		# the test filesystem, therefore most data consists out
-		# of zeros although this data is not very useful for
-		# detecting misplaced read/write requests.
-		# Ignore ENOSPC, it's not a problem..
-		dd if=/dev/urandom of=$SCRATCH_MNT/r bs=1M count=200 >> $seqres.full 2>&1 &
-		dd if=/dev/zero of=$SCRATCH_MNT/0 bs=1M count=2000 >> $seqres.full 2>&1
-		wait
+		# a sync call while the replace operation is ongoing.
+		fssize=2048
 	elif [ "${with_cancel}Q" = "cancelQ" ]; then
-		# produce some data to prevent that the replace operation
-		# finishes before the cancel request is started
-		dd if=/dev/zero of=$SCRATCH_MNT/0 bs=1M count=1000 >> $seqres.full 2>&1
+	# The goal is to produce enough data to prevent the replace
+	# operation from finishing before the cancel request is
+	# started.
+		fssize=1024
+	else
+		fssize=64
 	fi
+
+	# Since the available size was checked above, do not tolerate
+	# any failures.
+	for i in `seq $fssize`; do
+		cp $SCRATCH_MNT/t0 $SCRATCH_MNT/t$i || _fail "cp failed"
+	done > /dev/null 2>> $seqres.full
 	sync; sync
 
 	btrfs_replace_test $source_dev $target_dev "" $with_cancel $quick
@@ -214,7 +228,7 @@ btrfs_replace_test()
 		# before the status is printed
 		$BTRFS_UTIL_PROG replace status $SCRATCH_MNT > $tmp.tmp 2>&1
 		cat $tmp.tmp >> $seqres.full
-		grep -q canceled $tmp.tmp || _fail "btrfs replace status failed"
+		grep -q canceled $tmp.tmp || _fail "btrfs replace status (canceled) failed"
 	else
 		if [ "${quick}Q" = "thoroughQ" ]; then
 			# On current hardware, the thorough test runs
@@ -226,7 +240,7 @@ btrfs_replace_test()
 
 		$BTRFS_UTIL_PROG replace status $SCRATCH_MNT > $tmp.tmp 2>&1
 		cat $tmp.tmp >> $seqres.full
-		grep -q finished $tmp.tmp || _fail "btrfs replace status failed"
+		grep -q finished $tmp.tmp || _fail "btrfs replace status (finished) failed"
 	fi
 
 	if ps -p $noise_pid | grep -q $noise_pid; then
-- 
1.8.4


Thread overview: 6+ messages

2013-09-13 10:27 Stefan Behrens [this message]
2013-09-27 16:34 ` Jan Schmidt
2013-10-14 14:12 ` Rich Johnston
