* [PATCH 1/3] xfstests: dedupe a single big file and verify integrity
@ 2018-06-01  8:07 Zorro Lang
  2018-06-01  8:07 ` [PATCH 2/3] xfstests: iterate dedupe integrity test Zorro Lang
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Zorro Lang @ 2018-06-01  8:07 UTC (permalink / raw)
  To: fstests; +Cc: linux-xfs

Duperemove is a tool for finding duplicated extents and submitting
them for deduplication, and it supports XFS. This test tries to
verify the data integrity of XFS after running duperemove.

Signed-off-by: Zorro Lang <zlang@redhat.com>
---
Hi,

V1 of this patch is here:
https://www.spinics.net/lists/linux-xfs/msg18982.html

This time I'm sending all 3 cases at the same time.

Thanks,
Zorro

 common/config        |  1 +
 tests/shared/008     | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/shared/008.out |  3 ++
 tests/shared/group   |  1 +
 4 files changed, 84 insertions(+)
 create mode 100755 tests/shared/008
 create mode 100644 tests/shared/008.out

diff --git a/common/config b/common/config
index 02c378a9..def559c1 100644
--- a/common/config
+++ b/common/config
@@ -207,6 +207,7 @@ export SQLITE3_PROG="`set_prog_path sqlite3`"
 export TIMEOUT_PROG="`set_prog_path timeout`"
 export SETCAP_PROG="`set_prog_path setcap`"
 export GETCAP_PROG="`set_prog_path getcap`"
+export DUPEREMOVE_PROG="`set_prog_path duperemove`"
 
 # use 'udevadm settle' or 'udevsettle' to wait for lv to be settled.
 # newer systems have udevadm command but older systems like RHEL5 don't.
diff --git a/tests/shared/008 b/tests/shared/008
new file mode 100755
index 00000000..74362807
--- /dev/null
+++ b/tests/shared/008
@@ -0,0 +1,79 @@
+#! /bin/bash
+# FS QA Test 008
+#
+# Dedupe a single big file and verify integrity
+#
+#-----------------------------------------------------------------------
+# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+. ./common/reflink
+
+# remove previous $seqres.full before test
+rm -f $seqres.full
+
+# duperemove only supports btrfs and xfs (with the reflink feature).
+# Add other filesystems here once duperemove supports them.
+_supported_fs xfs btrfs
+_supported_os Linux
+_require_scratch_dedupe
+_require_command "$DUPEREMOVE_PROG" duperemove
+
+fssize=$((2 * 1024 * 1024 * 1024))
+_scratch_mkfs_sized $fssize > $seqres.full 2>&1
+_scratch_mount >> $seqres.full 2>&1
+
+# fill the fs with a big file that has identical contents throughout
+$XFS_IO_PROG -f -c "pwrite -S 0x55 0 $fssize" $SCRATCH_MNT/${seq}.file \
+	>> $seqres.full 2>&1
+md5sum $SCRATCH_MNT/${seq}.file > $TEST_DIR/${seq}md5.sum
+
+echo "= before cycle mount ="
+# Dedupe with 1M blocksize
+$DUPEREMOVE_PROG -dr --dedupe-options=same -b 1048576 $SCRATCH_MNT/ >>$seqres.full 2>&1
+# Verify integrity
+md5sum -c --quiet $TEST_DIR/${seq}md5.sum
+# Dedupe with 64k blocksize
+$DUPEREMOVE_PROG -dr --dedupe-options=same -b 65536 $SCRATCH_MNT/ >>$seqres.full 2>&1
+# Verify integrity again
+md5sum -c --quiet $TEST_DIR/${seq}md5.sum
+
+# umount and mount again, verify pagecache contents don't mutate
+_scratch_cycle_mount
+echo "= after cycle mount ="
+md5sum -c --quiet $TEST_DIR/${seq}md5.sum
+
+status=0
+exit
diff --git a/tests/shared/008.out b/tests/shared/008.out
new file mode 100644
index 00000000..f29d478f
--- /dev/null
+++ b/tests/shared/008.out
@@ -0,0 +1,3 @@
+QA output created by 008
+= before cycle mount =
+= after cycle mount =
diff --git a/tests/shared/group b/tests/shared/group
index b3663a03..de7fe79f 100644
--- a/tests/shared/group
+++ b/tests/shared/group
@@ -10,6 +10,7 @@
 005 dangerous_fuzzers
 006 auto enospc
 007 dangerous_fuzzers
+008 auto quick dedupe
 032 mkfs auto quick
 272 auto enospc rw
 289 auto quick
-- 
2.14.3
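The write/dedupe/verify pattern of shared/008 can be reproduced outside the fstests harness. Below is a minimal sketch of that flow; the temp directory, the 8M size, and the `command -v duperemove` guard are illustrative assumptions, not part of the patch:

```shell
#!/bin/bash
# Sketch of the write/dedupe/verify pattern used by shared/008, runnable
# outside the fstests harness.  Paths and sizes are illustrative.
set -e

workdir=$(mktemp -d)        # stands in for $SCRATCH_MNT
file=$workdir/big.file

# Write a file whose blocks all contain the same byte (0x55), so nearly
# every block is a dedupe candidate.
dd if=/dev/zero bs=1M count=8 2>/dev/null | tr '\0' '\125' > "$file"

# Record the checksum before deduplication.
md5sum "$file" > "$workdir/md5.sum"

# Dedupe if the tool is installed; the data must read back unchanged
# either way.
if command -v duperemove >/dev/null 2>&1; then
	duperemove -dr --dedupe-options=same -b 65536 "$workdir" >/dev/null 2>&1 || true
fi

md5sum -c --quiet "$workdir/md5.sum" && echo "integrity ok"

rm -rf "$workdir"
```

The same checksum file is reused after each dedupe pass, which is exactly how the test detects any data mutation.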



* [PATCH 2/3] xfstests: iterate dedupe integrity test
  2018-06-01  8:07 [PATCH 1/3] xfstests: dedupe a single big file and verify integrity Zorro Lang
@ 2018-06-01  8:07 ` Zorro Lang
  2018-06-07  8:23   ` Eryu Guan
  2018-06-01  8:07 ` [PATCH 3/3] xfstests: dedupe with random io race test Zorro Lang
  2018-06-07  8:13 ` [PATCH 1/3] xfstests: dedupe a single big file and verify integrity Eryu Guan
  2 siblings, 1 reply; 6+ messages in thread
From: Zorro Lang @ 2018-06-01  8:07 UTC (permalink / raw)
  To: fstests; +Cc: linux-xfs

This test dedupes a directory, then copies it into a new directory,
dedupes that new directory, copies it again, and so on. At the end it
verifies that the data in the last directory is still the same as in
the first one.

Signed-off-by: Zorro Lang <zlang@redhat.com>
---
 tests/shared/009     | 114 +++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/shared/009.out |   4 ++
 tests/shared/group   |   1 +
 3 files changed, 119 insertions(+)
 create mode 100755 tests/shared/009
 create mode 100644 tests/shared/009.out

diff --git a/tests/shared/009 b/tests/shared/009
new file mode 100755
index 00000000..f1f9215f
--- /dev/null
+++ b/tests/shared/009
@@ -0,0 +1,114 @@
+#! /bin/bash
+# FS QA Test 009
+#
+# Iterate dedupe integrity test
+#
+#-----------------------------------------------------------------------
+# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+. ./common/reflink
+
+# remove previous $seqres.full before test
+rm -f $seqres.full
+
+# real QA test starts here
+
+# duperemove only supports btrfs and xfs (with the reflink feature).
+# Add other filesystems here once duperemove supports them.
+_supported_fs xfs btrfs
+_supported_os Linux
+_require_scratch_dedupe
+_require_command "$DUPEREMOVE_PROG" duperemove
+
+_scratch_mkfs > $seqres.full 2>&1
+_scratch_mount >> $seqres.full 2>&1
+
+function iterate_dedup_verify()
+{
+	local src=$srcdir
+	local dest=$dupdir/1
+
+	for ((index = 1; index <= times; index++))
+	do
+		cp -a $src $dest
+		find $dest -type f -exec md5sum {} \; \
+			> $md5file$index
+		# Too much output, so only save the error output
+		$DUPEREMOVE_PROG -dr --dedupe-options=same $dupdir \
+			>/dev/null 2>>$seqres.full
+		md5sum -c --quiet $md5file$index
+		src=$dest
+		dest=$dupdir/$((index + 1))
+	done
+}
+
+srcdir=$SCRATCH_MNT/src
+dupdir=$SCRATCH_MNT/dup
+mkdir $srcdir $dupdir
+
+md5file=$TEST_DIR/${seq}md5.sum
+
+fsstress_opts="-w -r -f mknod=0"
+# Create some files to be original data
+$FSSTRESS_PROG $fsstress_opts -d $srcdir \
+	       -n 200 -p $((5 * LOAD_FACTOR)) >/dev/null 2>&1
+
+# Calculate how many test cycles will be run
+src_size=`du -ks $srcdir | awk '{print $1}'`
+free_size=`df -kP $SCRATCH_MNT | grep -v Filesystem | awk '{print $4}'`
+times=$((free_size / src_size))
+if [ $times -gt $((10 * TIME_FACTOR)) ]; then
+	times=$((10 * TIME_FACTOR))
+fi
+
+echo "= Do dedup and verify ="
+iterate_dedup_verify
+
+# Use the last checksum file to verify the original data
+sed -e s#dup/$times#src#g $md5file$times > $md5file
+echo "= Backwords verify ="
+md5sum -c --quiet $md5file
+
+# Umount and mount again, verify a fresh read from disk shows no mutations.
+_scratch_cycle_mount
+echo "= Verify after cycle mount ="
+for ((index = 1; index <= times; index++))
+do
+	md5sum -c --quiet $md5file$index
+done
+
+status=0
+exit
diff --git a/tests/shared/009.out b/tests/shared/009.out
new file mode 100644
index 00000000..44a78ba3
--- /dev/null
+++ b/tests/shared/009.out
@@ -0,0 +1,4 @@
+QA output created by 009
+= Do dedup and verify =
+= Backwords verify =
+= Verify after cycle mount =
diff --git a/tests/shared/group b/tests/shared/group
index de7fe79f..2255844b 100644
--- a/tests/shared/group
+++ b/tests/shared/group
@@ -11,6 +11,7 @@
 006 auto enospc
 007 dangerous_fuzzers
 008 auto quick dedupe
+009 auto dedupe
 032 mkfs auto quick
 272 auto enospc rw
 289 auto quick
-- 
2.14.3
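The loop-sizing logic in this test (free space divided by source size, capped at `10 * TIME_FACTOR`) can be exercised on its own. A rough sketch, assuming a small random source directory and a `TIME_FACTOR` default of 1 outside the harness:

```shell
#!/bin/bash
# Sketch of how shared/009 sizes its copy/dedupe loop: iterations =
# free space / source size, capped so runtime stays bounded.  The 64K
# source and the TIME_FACTOR default are assumptions for running
# outside the harness.
TIME_FACTOR=${TIME_FACTOR:-1}

srcdir=$(mktemp -d)
dd if=/dev/urandom of="$srcdir/data" bs=1K count=64 2>/dev/null

src_size=$(du -ks "$srcdir" | awk '{print $1}')          # KiB used by source
free_size=$(df -kP "$srcdir" | awk 'NR==2 {print $4}')   # KiB free on the fs

times=$((free_size / src_size))
cap=$((10 * TIME_FACTOR))
[ "$times" -gt "$cap" ] && times=$cap

echo "would run $times copy/dedupe iterations"
rm -rf "$srcdir"
```

`df -kP` keeps the output on one line per filesystem, so the second line is always the data row; this is why the test greps away the `Filesystem` header before picking field 4.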



* [PATCH 3/3] xfstests: dedupe with random io race test
  2018-06-01  8:07 [PATCH 1/3] xfstests: dedupe a single big file and verify integrity Zorro Lang
  2018-06-01  8:07 ` [PATCH 2/3] xfstests: iterate dedupe integrity test Zorro Lang
@ 2018-06-01  8:07 ` Zorro Lang
  2018-06-07  8:39   ` Eryu Guan
  2018-06-07  8:13 ` [PATCH 1/3] xfstests: dedupe a single big file and verify integrity Eryu Guan
  2 siblings, 1 reply; 6+ messages in thread
From: Zorro Lang @ 2018-06-01  8:07 UTC (permalink / raw)
  To: fstests; +Cc: linux-xfs

Run several duperemove processes and fsstress on the same directory at
the same time, and make sure the race doesn't break the filesystem or
the kernel.

Signed-off-by: Zorro Lang <zlang@redhat.com>
---
 tests/shared/010     | 111 +++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/shared/010.out |   2 +
 tests/shared/group   |   1 +
 3 files changed, 114 insertions(+)
 create mode 100755 tests/shared/010
 create mode 100644 tests/shared/010.out

diff --git a/tests/shared/010 b/tests/shared/010
new file mode 100755
index 00000000..b9618ee6
--- /dev/null
+++ b/tests/shared/010
@@ -0,0 +1,111 @@
+#! /bin/bash
+# FS QA Test 010
+#
+# Dedupe & random I/O race test: run multi-threaded fsstress and dedupe on
+# the same directory/files
+#
+#-----------------------------------------------------------------------
+# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+	kill_all_stress
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+. ./common/reflink
+
+# remove previous $seqres.full before test
+rm -f $seqres.full
+
+# real QA test starts here
+
+# duperemove only supports btrfs and xfs (with the reflink feature).
+# Add other filesystems here once duperemove supports them.
+_supported_fs xfs btrfs
+_supported_os Linux
+_require_scratch_dedupe
+_require_command "$DUPEREMOVE_PROG" duperemove
+_require_command "$KILLALL_PROG" killall
+
+_scratch_mkfs > $seqres.full 2>&1
+_scratch_mount >> $seqres.full 2>&1
+
+function kill_all_stress()
+{
+	local f=1
+	local d=1
+
+	# kill the bash processes which run duperemove in a loop
+	if [ -n "$loop_dedup_pid" ]; then
+		kill $loop_dedup_pid > /dev/null 2>&1
+		wait $loop_dedup_pid > /dev/null 2>&1
+		loop_dedup_pid=""
+	fi
+
+	# Make sure all fsstress and duperemove processes get killed
+	while [ $((f + d)) -ne 0 ]; do
+		$KILLALL_PROG -q $FSSTRESS_PROG > /dev/null 2>&1
+		$KILLALL_PROG -q $DUPEREMOVE_PROG > /dev/null 2>&1
+		f=`ps -eLf | grep $FSSTRESS_PROG | grep -v "grep" | wc -l`
+		d=`ps -eLf | grep $DUPEREMOVE_PROG | grep -v "grep" | wc -l`
+		sleep 2
+	done
+}
+
+SLEEP_TIME=$((50 * TIME_FACTOR))
+
+# Start fsstress
+fsstress_opts="-r -n 1000 -p $((5 * LOAD_FACTOR))"
+$FSSTRESS_PROG $fsstress_opts -d $SCRATCH_MNT -l 0 >> $seqres.full 2>&1 &
+loop_dedup_pid=""
+# Start several dedupe processes on same directory
+for ((i = 0; i < $((2 * LOAD_FACTOR)); i++)); do
+	while true; do
+		$DUPEREMOVE_PROG -dr --dedupe-options=same $SCRATCH_MNT/ \
+			>>$seqres.full 2>&1
+	done &
+	loop_dedup_pid="$! $loop_dedup_pid"
+done
+
+# End the test after $SLEEP_TIME seconds
+sleep $SLEEP_TIME
+kill_all_stress
+
+# umount and mount again, verify pagecache contents don't mutate and a fresh
+# read from the disk also doesn't show mutations.
+find $SCRATCH_MNT -type f -exec md5sum {} \; > $TEST_DIR/${seq}md5.sum
+_scratch_cycle_mount
+md5sum -c --quiet $TEST_DIR/${seq}md5.sum
+
+echo "Silence is golden"
+status=0
+exit
diff --git a/tests/shared/010.out b/tests/shared/010.out
new file mode 100644
index 00000000..1d83a8d6
--- /dev/null
+++ b/tests/shared/010.out
@@ -0,0 +1,2 @@
+QA output created by 010
+Silence is golden
diff --git a/tests/shared/group b/tests/shared/group
index 2255844b..9ab88bac 100644
--- a/tests/shared/group
+++ b/tests/shared/group
@@ -12,6 +12,7 @@
 007 dangerous_fuzzers
 008 auto quick dedupe
 009 auto dedupe
+010 auto dedupe
 032 mkfs auto quick
 272 auto enospc rw
 289 auto quick
-- 
2.14.3
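The kill_all_stress() pattern above — kill the loop driver by PID so it stops respawning workers, then poll until every worker is gone — can be demonstrated with a harmless stand-in for the duperemove loop. In this sketch a renamed `sleep` plays the worker; the marker name and timings are invented for illustration:

```shell
#!/bin/bash
# Sketch of the kill_all_stress() pattern from shared/010.  A renamed
# "sleep" stands in for duperemove; nothing here touches a filesystem.

marker="dedupe_race_demo_$$"

# Driver loop, like "while true; do duperemove ...; done &" in the test.
while true; do
	bash -c "exec -a $marker sleep 1" >/dev/null 2>&1
done &
loop_pid=$!

sleep 1

# Kill the driver by PID so no new workers are spawned...
kill "$loop_pid" >/dev/null 2>&1
wait "$loop_pid" 2>/dev/null

# ...then make sure all remaining workers are reaped, as the test does
# with killall plus a ps poll.
while pgrep -f "$marker" >/dev/null 2>&1; do
	pkill -f "$marker" >/dev/null 2>&1
	sleep 1
done
echo "all workers stopped"
```

Killing the driver first matters: reaping workers before the driver dies would just let it spawn fresh ones, which is why the test's loop keeps re-running killall until the process counts reach zero.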



* Re: [PATCH 1/3] xfstests: dedupe a single big file and verify integrity
  2018-06-01  8:07 [PATCH 1/3] xfstests: dedupe a single big file and verify integrity Zorro Lang
  2018-06-01  8:07 ` [PATCH 2/3] xfstests: iterate dedupe integrity test Zorro Lang
  2018-06-01  8:07 ` [PATCH 3/3] xfstests: dedupe with random io race test Zorro Lang
@ 2018-06-07  8:13 ` Eryu Guan
  2 siblings, 0 replies; 6+ messages in thread
From: Eryu Guan @ 2018-06-07  8:13 UTC (permalink / raw)
  To: Zorro Lang; +Cc: fstests, linux-xfs

On Fri, Jun 01, 2018 at 04:07:31PM +0800, Zorro Lang wrote:
> Duperemove is a tool for finding duplicated extents and submitting
> them for deduplication, and it supports XFS. This case trys to
> verify the integrity of XFS after running duperemove.
> 
> Signed-off-by: Zorro Lang <zlang@redhat.com>
> ---
> Hi,
> 
> The V1 should be:
> https://www.spinics.net/lists/linux-xfs/msg18982.html
> 
> This time I send 3 cases at same time.
> 
> Thanks,
> Zorro
> 
>  common/config        |  1 +
>  tests/shared/008     | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  tests/shared/008.out |  3 ++
>  tests/shared/group   |  1 +
>  4 files changed, 84 insertions(+)
>  create mode 100755 tests/shared/008
>  create mode 100644 tests/shared/008.out
> 
> diff --git a/common/config b/common/config
> index 02c378a9..def559c1 100644
> --- a/common/config
> +++ b/common/config
> @@ -207,6 +207,7 @@ export SQLITE3_PROG="`set_prog_path sqlite3`"
>  export TIMEOUT_PROG="`set_prog_path timeout`"
>  export SETCAP_PROG="`set_prog_path setcap`"
>  export GETCAP_PROG="`set_prog_path getcap`"
> +export DUPEREMOVE_PROG="`set_prog_path duperemove`"
>  
>  # use 'udevadm settle' or 'udevsettle' to wait for lv to be settled.
>  # newer systems have udevadm command but older systems like RHEL5 don't.
> diff --git a/tests/shared/008 b/tests/shared/008
> new file mode 100755
> index 00000000..74362807
> --- /dev/null
> +++ b/tests/shared/008
> @@ -0,0 +1,79 @@
> +#! /bin/bash
> +# FS QA Test 008
> +#
> +# Dedupe a single big file and verify integrity
> +#
> +#-----------------------------------------------------------------------
> +# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
> +#
> +# This program is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU General Public License as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it would be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write the Free Software Foundation,
> +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
> +#-----------------------------------------------------------------------
> +#
> +
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1	# failure is the default!
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +_cleanup()
> +{
> +	cd /
> +	rm -f $tmp.*
> +}
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +. ./common/reflink
> +
> +# remove previous $seqres.full before test
> +rm -f $seqres.full
> +
> +# duperemove only supports btrfs and xfs (with reflink feature).
> +# Add other filesystems if it supports more later.
> +_supported_fs xfs btrfs

I'm wondering if this could be moved to generic. duperemove only
supports btrfs and xfs, is that because only btrfs and xfs support
reflink, or is there some other reason in duperemove itself?

If it's the former case, it's fine to move test to generic, as
_require_scratch_dedupe serves as a guard; if it's the latter case, I
think shared test is a good match.

> +_supported_os Linux
> +_require_scratch_dedupe
> +_require_command "$DUPEREMOVE_PROG" duperemove
> +
> +fssize=$((2 * 1024 * 1024 * 1024))
> +_scratch_mkfs_sized $fssize > $seqres.full 2>&1
> +_scratch_mount >> $seqres.full 2>&1
> +
> +# fill the fs with a big file has same contents
> +$XFS_IO_PROG -f -c "pwrite -S 0x55 0 $fssize" $SCRATCH_MNT/${seq}.file \
> +	>> $seqres.full 2>&1
> +md5sum $SCRATCH_MNT/${seq}.file > $TEST_DIR/${seq}md5.sum

Dump md5sum to $tmp? e.g. $tmp.md5sum

The other two tests have similar issues.

Thanks,
Eryu

> +
> +echo "= before cycle mount ="
> +# Dedupe with 1M blocksize
> +$DUPEREMOVE_PROG -dr --dedupe-options=same -b 1048576 $SCRATCH_MNT/ >>$seqres.full 2>&1
> +# Verify integrity
> +md5sum -c --quiet $TEST_DIR/${seq}md5.sum
> +# Dedupe with 64k blocksize
> +$DUPEREMOVE_PROG -dr --dedupe-options=same -b 65536 $SCRATCH_MNT/ >>$seqres.full 2>&1
> +# Verify integrity again
> +md5sum -c --quiet $TEST_DIR/${seq}md5.sum
> +
> +# umount and mount again, verify pagecache contents don't mutate
> +_scratch_cycle_mount
> +echo "= after cycle mount ="
> +md5sum -c --quiet $TEST_DIR/${seq}md5.sum
> +
> +status=0
> +exit
> diff --git a/tests/shared/008.out b/tests/shared/008.out
> new file mode 100644
> index 00000000..f29d478f
> --- /dev/null
> +++ b/tests/shared/008.out
> @@ -0,0 +1,3 @@
> +QA output created by 008
> += before cycle mount =
> += after cycle mount =
> diff --git a/tests/shared/group b/tests/shared/group
> index b3663a03..de7fe79f 100644
> --- a/tests/shared/group
> +++ b/tests/shared/group
> @@ -10,6 +10,7 @@
>  005 dangerous_fuzzers
>  006 auto enospc
>  007 dangerous_fuzzers
> +008 auto quick dedupe
>  032 mkfs auto quick
>  272 auto enospc rw
>  289 auto quick
> -- 
> 2.14.3
> 
> --
> To unsubscribe from this list: send the line "unsubscribe fstests" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
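The `$tmp` convention Eryu is referring to keeps per-test scratch files under `/tmp/$$` and removes them from the cleanup trap, instead of leaving them behind in `$TEST_DIR`. A minimal sketch of that pattern (the file names are illustrative):

```shell
#!/bin/bash
# Sketch of the fstests $tmp convention: per-test temporary files live
# under /tmp/$$ and the cleanup trap removes them on any exit path.
tmp=/tmp/$$
status=1	# failure is the default!

_cleanup()
{
	rm -f $tmp.*
}
trap "_cleanup; exit \$status" 0 1 2 3 15

# A checksum file named like $tmp.md5sum disappears automatically when
# the test exits, passing or failing.
echo "some test data" > $tmp.data
md5sum $tmp.data > $tmp.md5sum
md5sum -c --quiet $tmp.md5sum && echo "checksum ok"

status=0
```

Because the trap also fires on signals 1, 2, 3, and 15, an interrupted test run leaves no stale checksum files behind — which is the practical advantage over writing them into `$TEST_DIR`.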


* Re: [PATCH 2/3] xfstests: iterate dedupe integrity test
  2018-06-01  8:07 ` [PATCH 2/3] xfstests: iterate dedupe integrity test Zorro Lang
@ 2018-06-07  8:23   ` Eryu Guan
  0 siblings, 0 replies; 6+ messages in thread
From: Eryu Guan @ 2018-06-07  8:23 UTC (permalink / raw)
  To: Zorro Lang; +Cc: fstests, linux-xfs

On Fri, Jun 01, 2018 at 04:07:32PM +0800, Zorro Lang wrote:
> This case does dedupe on a dir, then copy the dir to next dir. Dedupe
> the next dir again, then copy this dir to next again, and dedupe
> again ... At the end, verify the data in the last dir is still same
> with the first one.
> 
> Signed-off-by: Zorro Lang <zlang@redhat.com>
> ---
>  tests/shared/009     | 114 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  tests/shared/009.out |   4 ++
>  tests/shared/group   |   1 +
>  3 files changed, 119 insertions(+)
>  create mode 100755 tests/shared/009
>  create mode 100644 tests/shared/009.out
> 
> diff --git a/tests/shared/009 b/tests/shared/009
> new file mode 100755
> index 00000000..f1f9215f
> --- /dev/null
> +++ b/tests/shared/009
> @@ -0,0 +1,114 @@
> +#! /bin/bash
> +# FS QA Test 009
> +#
> +# Iterate dedupe integrity test

I think this needs better test description :)

> +#
> +#-----------------------------------------------------------------------
> +# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
> +#
> +# This program is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU General Public License as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it would be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write the Free Software Foundation,
> +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
> +#-----------------------------------------------------------------------
> +#
> +
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1	# failure is the default!
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +_cleanup()
> +{
> +	cd /
> +	rm -f $tmp.*
> +}
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +. ./common/reflink
> +
> +# remove previous $seqres.full before test
> +rm -f $seqres.full
> +
> +# real QA test starts here
> +
> +# duperemove only supports btrfs and xfs (with reflink feature).
> +# Add other filesystems if it supports more later.
> +_supported_fs xfs btrfs
> +_supported_os Linux
> +_require_scratch_dedupe
> +_require_command "$DUPEREMOVE_PROG" duperemove
> +
> +_scratch_mkfs > $seqres.full 2>&1
> +_scratch_mount >> $seqres.full 2>&1
> +
> +function iterate_dedup_verify()
> +{
> +	local src=$srcdir
> +	local dest=$dupdir/1
> +
> +	for ((index = 1; index <= times; index++))
> +	do

for ...; do
...
done

And I suspect that we don't get much extra test coverage by repeating
too many times, maybe reduce $times to just a few to save some test
time?

> +		cp -a $src $dest
> +		find $dest -type f -exec md5sum {} \; \
> +			> $md5file$index
> +		# Too many output, so only save error output
> +		$DUPEREMOVE_PROG -dr --dedupe-options=same $dupdir \
> +			>/dev/null 2>$seqres.full
> +		md5sum -c --quiet $md5file$index
> +		src=$dest
> +		dest=$dupdir/$((index + 1))
> +	done
> +}
> +
> +srcdir=$SCRATCH_MNT/src
> +dupdir=$SCRATCH_MNT/dup
> +mkdir $srcdir $dupdir
> +
> +md5file=$TEST_DIR/${seq}md5.sum
> +
> +fsstress_opts="-w -r -f mknod=0"

Why "-f mknod=0"? Need a comment.

> +# Create some files to be original data
> +$FSSTRESS_PROG $fsstress_opts -d $srcdir \
> +	       -n 200 -p $((5 * LOAD_FACTOR)) >/dev/null 2>&1
> +
> +# Calculate how many test cycles will be run
> +src_size=`du -ks $srcdir | awk '{print $1}'`
> +free_size=`df -kP $SCRATCH_MNT | grep -v Filesystem | awk '{print $4}'`
> +times=$((free_size / src_size))
> +if [ $times -gt $((10 * TIME_FACTOR)) ]; then
> +	times=$((10 * TIME_FACTOR))
> +fi
> +
> +echo "= Do dedup and verify ="
> +iterate_dedup_verify
> +
> +# Use the last checksum file to verify the original data
> +sed -e s#dup/$times#src#g $md5file$times > $md5file
> +echo "= Backwords verify ="
> +md5sum -c --quiet $md5file
> +
> +# read from the disk also doesn't show mutations.
> +_scratch_cycle_mount
> +echo "= Verify after cycle mount ="
> +for ((index = 1; index <= times; index++))
> +do

Same here for the "for" format.

> +	md5sum -c --quiet $md5file$index
> +done
> +
> +status=0
> +exit
> diff --git a/tests/shared/009.out b/tests/shared/009.out
> new file mode 100644
> index 00000000..44a78ba3
> --- /dev/null
> +++ b/tests/shared/009.out
> @@ -0,0 +1,4 @@
> +QA output created by 009
> += Do dedup and verify =
> += Backwords verify =
> += Verify after cycle mount =
> diff --git a/tests/shared/group b/tests/shared/group
> index de7fe79f..2255844b 100644
> --- a/tests/shared/group
> +++ b/tests/shared/group
> @@ -11,6 +11,7 @@
>  006 auto enospc
>  007 dangerous_fuzzers
>  008 auto quick dedupe
> +009 auto dedupe

All other 'dedupe' tests are in 'clone' group too, add it?

Thanks,
Eryu

>  032 mkfs auto quick
>  272 auto enospc rw
>  289 auto quick
> -- 
> 2.14.3
> 


* Re: [PATCH 3/3] xfstests: dedupe with random io race test
  2018-06-01  8:07 ` [PATCH 3/3] xfstests: dedupe with random io race test Zorro Lang
@ 2018-06-07  8:39   ` Eryu Guan
  0 siblings, 0 replies; 6+ messages in thread
From: Eryu Guan @ 2018-06-07  8:39 UTC (permalink / raw)
  To: Zorro Lang; +Cc: fstests, linux-xfs

On Fri, Jun 01, 2018 at 04:07:33PM +0800, Zorro Lang wrote:
> Run several duperemove processes with fsstress on same directory at
> same time. Make sure the race won't break the fs or kernel.
> 
> Signed-off-by: Zorro Lang <zlang@redhat.com>
> ---
>  tests/shared/010     | 111 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  tests/shared/010.out |   2 +
>  tests/shared/group   |   1 +
>  3 files changed, 114 insertions(+)
>  create mode 100755 tests/shared/010
>  create mode 100644 tests/shared/010.out
> 
> diff --git a/tests/shared/010 b/tests/shared/010
> new file mode 100755
> index 00000000..b9618ee6
> --- /dev/null
> +++ b/tests/shared/010
> @@ -0,0 +1,111 @@
> +#! /bin/bash
> +# FS QA Test 010
> +#
> +# Dedup & random I/O race test, do multi-threads fsstress and dedupe on
> +# same directory/files
> +#
> +#-----------------------------------------------------------------------
> +# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
> +#
> +# This program is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU General Public License as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it would be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write the Free Software Foundation,
> +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
> +#-----------------------------------------------------------------------
> +#
> +
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1	# failure is the default!
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +_cleanup()
> +{
> +	cd /
> +	rm -f $tmp.*
> +	kill_all_stress
> +}
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +. ./common/reflink
> +
> +# remove previous $seqres.full before test
> +rm -f $seqres.full
> +
> +# real QA test starts here
> +
> +# duperemove only supports btrfs and xfs (with reflink feature).
> +# Add other filesystems if it supports more later.
> +_supported_fs xfs btrfs
> +_supported_os Linux
> +_require_scratch_dedupe
> +_require_command "$DUPEREMOVE_PROG" duperemove
> +_require_command "$KILLALL_PROG" killall
> +
> +_scratch_mkfs > $seqres.full 2>&1
> +_scratch_mount >> $seqres.full 2>&1
> +
> +function kill_all_stress()
> +{
> +	local f=1
> +	local d=1
> +
> +	# kill the bash process which loop run duperemove
> +	if [ -n "$loop_dedup_pid" ]; then
> +		kill $loop_dedup_pid > /dev/null 2>&1
> +		wait $loop_dedup_pid > /dev/null 2>&1
> +		loop_dedup_pid=""
> +	fi
> +
> +	# Make sure all fsstress and duperemove processes get killed
> +	while [ $((f + d)) -ne 0 ]; do
> +		$KILLALL_PROG -q $FSSTRESS_PROG > /dev/null 2>&1
> +		$KILLALL_PROG -q $DUPEREMOVE_PROG > /dev/null 2>&1

"sleep 1" right after killall to give the processes to exit?

> +		f=`ps -eLf | grep $FSSTRESS_PROG | grep -v "grep" | wc -l`
> +		d=`ps -eLf | grep $DUPEREMOVE_PROG | grep -v "grep" | wc -l`
> +		sleep 2

So we don't waste another 2s here if fsstress and duperemove all died
within those 2s.

> +	done
> +}
> +
> +SLEEP_TIME=$((50 * TIME_FACTOR))

sleep_time, use lower case for local variables.

> +
> +# Start fsstress
> +fsstress_opts="-r -n 1000 -p $((5 * LOAD_FACTOR))"
> +$FSSTRESS_PROG $fsstress_opts -d $SCRATCH_MNT -l 0 >> $seqres.full 2>&1 &
> +loop_dedup_pid=""
> +# Start several dedupe processes on same directory
> +for ((i = 0; i < $((2 * LOAD_FACTOR)); i++)); do
> +	while true; do
> +		$DUPEREMOVE_PROG -dr --dedupe-options=same $SCRATCH_MNT/ \
> +			>>$seqres.full 2>&1
> +	done &
> +	loop_dedup_pid="$! $loop_dedup_pid"
> +done
> +
> +# End the test after $SLEEP_TIME seconds
> +sleep $SLEEP_TIME
> +kill_all_stress
> +
> +# umount and mount again, verify pagecache contents don't mutate and a fresh
> +# read from the disk also doesn't show mutations.
> +find $testdir -type f -exec md5sum {} \; > $TEST_DIR/${seq}md5.sum
> +_scratch_cycle_mount
> +md5sum -c --quiet $TEST_DIR/${seq}md5.sum
> +
> +echo "Silence is golden"
> +status=0
> +exit
> diff --git a/tests/shared/010.out b/tests/shared/010.out
> new file mode 100644
> index 00000000..1d83a8d6
> --- /dev/null
> +++ b/tests/shared/010.out
> @@ -0,0 +1,2 @@
> +QA output created by 010
> +Silence is golden
> diff --git a/tests/shared/group b/tests/shared/group
> index 2255844b..9ab88bac 100644
> --- a/tests/shared/group
> +++ b/tests/shared/group
> @@ -12,6 +12,7 @@
>  007 dangerous_fuzzers
>  008 auto quick dedupe
>  009 auto dedupe
> +010 auto dedupe

Also add 'stress' group?

Thanks,
Eryu

>  032 mkfs auto quick
>  272 auto enospc rw
>  289 auto quick
> -- 
> 2.14.3
> 
