* [PATCH 1/2] fstests: fix call sites that used xfs_io directly
@ 2016-10-14 20:43 Amir Goldstein
  2016-10-14 20:43 ` [PATCH 2/2] fstests: run xfs_io as multi threaded process Amir Goldstein
  0 siblings, 1 reply; 8+ messages in thread
From: Amir Goldstein @ 2016-10-14 20:43 UTC (permalink / raw)
  To: Dave Chinner, eguan; +Cc: fstests

Convert those few remaining call sites to use the XFS_IO_PROG env var.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
---
 common/quota      | 4 ++--
 tests/generic/043 | 2 +-
 tests/generic/044 | 4 ++--
 tests/generic/045 | 4 ++--
 tests/generic/046 | 4 ++--
 tests/generic/047 | 2 +-
 tests/generic/048 | 2 +-
 tests/generic/049 | 2 +-
 tests/generic/224 | 4 ++--
 tests/xfs/109     | 4 ++--
 tests/xfs/114     | 4 ++--
 tests/xfs/190     | 4 ++--
 tests/xfs/201     | 2 +-
 tests/xfs/229     | 2 +-
 tests/xfs/250     | 2 +-
 tests/xfs/291     | 2 +-
 16 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/common/quota b/common/quota
index fb2b2a0..678bc43 100644
--- a/common/quota
+++ b/common/quota
@@ -139,7 +139,7 @@ _file_as_id()
 
     parent=`dirname $1`
     if [ $3 = p ]; then
-	echo PARENT: xfs_io -r -c "chproj $2" -c "chattr +P" $parent >>$seqres.full
+	echo PARENT: $XFS_IO_PROG -r -c "chproj $2" -c "chattr +P" $parent >>$seqres.full
 	$XFS_IO_PROG -r -c "chproj $2" -c "chattr +P" $parent >>$seqres.full 2>&1
 	magik='$>'	# (irrelevent, above set projid-inherit-on-parent)
     elif [ $3 = u ]; then
@@ -165,7 +165,7 @@ EOF
 #	exec "dd if=/dev/zero of=$1 bs=$4 count=$5 >>$seqres.full 2>&1";
 
     if [ $3 = p ]; then
-	echo PARENT: xfs_io -r -c "chproj 0" -c "chattr -P" $parent >>$seqres.full
+	echo PARENT: $XFS_IO_PROG -r -c "chproj 0" -c "chattr -P" $parent >>$seqres.full
 	$XFS_IO_PROG -r -c "chproj 0" -c "chattr -P" $parent >>$seqres.full 2>&1
     fi
 }
diff --git a/tests/generic/043 b/tests/generic/043
index bd8eef8..b76a5aa 100755
--- a/tests/generic/043
+++ b/tests/generic/043
@@ -50,7 +50,7 @@ i=1;
 while [ $i -lt 1000 ]
 do
 	file=$SCRATCH_MNT/$i
-	xfs_io -f -c "pwrite -b 64k -S 0xff 0 64k" $file > /dev/null
+	$XFS_IO_PROG -f -c "pwrite -b 64k -S 0xff 0 64k" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error creating/writing file $file
diff --git a/tests/generic/044 b/tests/generic/044
index f46e828..0331baa 100755
--- a/tests/generic/044
+++ b/tests/generic/044
@@ -50,13 +50,13 @@ i=1;
 while [ $i -lt 1000 ]
 do
 	file=$SCRATCH_MNT/$i
-	xfs_io -f -c "pwrite -b 64k -S 0xff 0 64k" $file > /dev/null
+	$XFS_IO_PROG -f -c "pwrite -b 64k -S 0xff 0 64k" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error creating/writing file $file
 		exit
 	fi
-	xfs_io -c "truncate 64k" $file > /dev/null
+	$XFS_IO_PROG -c "truncate 64k" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error truncating file $file
diff --git a/tests/generic/045 b/tests/generic/045
index 4ec7650..874c955 100755
--- a/tests/generic/045
+++ b/tests/generic/045
@@ -50,13 +50,13 @@ i=1;
 while [ $i -lt 1000 ]
 do
 	file=$SCRATCH_MNT/$i
-	xfs_io -f -c "pwrite -b 64k -S 0xff 0 64k" $file > /dev/null
+	$XFS_IO_PROG -f -c "pwrite -b 64k -S 0xff 0 64k" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error creating/writing file $file
 		exit
 	fi
-	xfs_io -c "truncate 32k" $file > /dev/null
+	$XFS_IO_PROG -c "truncate 32k" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error truncating file $file
diff --git a/tests/generic/046 b/tests/generic/046
index 08f1137..5a894b8 100755
--- a/tests/generic/046
+++ b/tests/generic/046
@@ -50,13 +50,13 @@ i=1;
 while [ $i -lt 1000 ]
 do
 	file=$SCRATCH_MNT/$i
-	xfs_io -f -c "pwrite -b 32k -S 0xff 0 32k" $file > /dev/null
+	$XFS_IO_PROG -f -c "pwrite -b 32k -S 0xff 0 32k" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error creating/writing file $file
 		exit
 	fi
-	xfs_io -c "truncate 64k" $file > /dev/null
+	$XFS_IO_PROG -c "truncate 64k" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error truncating file $file
diff --git a/tests/generic/047 b/tests/generic/047
index b894ee6..631dc1e 100755
--- a/tests/generic/047
+++ b/tests/generic/047
@@ -81,7 +81,7 @@ i=1;
 while [ $i -lt 1000 ]
 do
 	file=$SCRATCH_MNT/$i
-	xfs_io -f -c "pwrite -b 32k -S 0xff 0 32k" -c "fsync" $file > /dev/null
+	$XFS_IO_PROG -f -c "pwrite -b 32k -S 0xff 0 32k" -c "fsync" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error creating/writing file $file
diff --git a/tests/generic/048 b/tests/generic/048
index 6f5f444..51d7efd 100755
--- a/tests/generic/048
+++ b/tests/generic/048
@@ -84,7 +84,7 @@ i=1;
 while [ $i -lt 1000 ]
 do
 	file=$SCRATCH_MNT/$i
-	xfs_io -f -c "pwrite -b 64k -S 0xff 0 10m" $file > /dev/null
+	$XFS_IO_PROG -f -c "pwrite -b 64k -S 0xff 0 10m" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error creating/writing file $file
diff --git a/tests/generic/049 b/tests/generic/049
index 320318e..1299242 100755
--- a/tests/generic/049
+++ b/tests/generic/049
@@ -81,7 +81,7 @@ i=1;
 while [ $i -lt 1000 ]
 do
 	file=$SCRATCH_MNT/$i
-	xfs_io -f -c "pwrite -b 32k -S 0xff 0 32k" $file > /dev/null
+	$XFS_IO_PROG -f -c "pwrite -b 32k -S 0xff 0 32k" $file > /dev/null
 	if [ $? -ne 0 ]
 	then
 		echo error creating/writing file $file
diff --git a/tests/generic/224 b/tests/generic/224
index 391d877..2c30a75 100755
--- a/tests/generic/224
+++ b/tests/generic/224
@@ -59,7 +59,7 @@ _scratch_mount >> $seqres.full 2>&1
 
 # set the reserved block pool to almost empty for XFS
 if [ "$FSTYP" = "xfs" ]; then
-	xfs_io -x -c "resblks 4" $SCRATCH_MNT >> $seqres.full 2>&1
+	$XFS_IO_PROG -x -c "resblks 4" $SCRATCH_MNT >> $seqres.full 2>&1
 fi
 
 FILES=1000
@@ -71,7 +71,7 @@ for i in `seq 0 1 $FILES`; do
 	# tripped over.
         (
 		sleep 5
-		xfs_io -f -c "truncate 10485760" $SCRATCH_MNT/testfile.$i
+		$XFS_IO_PROG -f -c "truncate 10485760" $SCRATCH_MNT/testfile.$i
 		dd if=/dev/zero of=$SCRATCH_MNT/testfile.$i bs=4k conv=notrunc
 	) > /dev/null 2>&1 &
 done
diff --git a/tests/xfs/109 b/tests/xfs/109
index ac20619..e0fdec3 100755
--- a/tests/xfs/109
+++ b/tests/xfs/109
@@ -50,7 +50,7 @@ populate()
 	i=0
 	while [ $i -le $files -a "X$faststart" = "X" ]; do
 		file=$SCRATCH_MNT/f$i
-		xfs_io -f -d -c 'pwrite -b 64k 0 64k' $file >/dev/null
+		$XFS_IO_PROG -f -d -c 'pwrite -b 64k 0 64k' $file >/dev/null
 		let i=$i+1
 	done
 
@@ -77,7 +77,7 @@ allocate()
 		{
 			j=0
 			while [ $j -lt 100 ]; do
-				xfs_io -f -c 'pwrite -b 64k 0 16m' $file \
+				$XFS_IO_PROG -f -c 'pwrite -b 64k 0 16m' $file \
 					>/dev/null 2>&1
 				rm $file
 				let j=$j+1
diff --git a/tests/xfs/114 b/tests/xfs/114
index 50cc71b..24474f7 100755
--- a/tests/xfs/114
+++ b/tests/xfs/114
@@ -44,7 +44,7 @@ _check_paths()
 	sync; sleep 1
 	echo ""
 	echo "Check parent"
-	if ! xfs_io -x -c 'parent -c' $SCRATCH_MNT | _filter_num; then
+	if ! $XFS_IO_PROG -x -c 'parent -c' $SCRATCH_MNT | _filter_num; then
 		exit 1
 	fi
 }
@@ -55,7 +55,7 @@ _print_names()
 	echo "Print out hardlink names for given path, $1"
 	echo ""
 
-	xfs_io -x -c parent $1 | awk '/p_ino.*=/ {$3 = "inodeXXX"; print; next} {print}' 
+	$XFS_IO_PROG -x -c parent $1 | awk '/p_ino.*=/ {$3 = "inodeXXX"; print; next} {print}'
 }
 
 _test_create()
diff --git a/tests/xfs/190 b/tests/xfs/190
index 614a80c..d688216 100755
--- a/tests/xfs/190
+++ b/tests/xfs/190
@@ -62,8 +62,8 @@ dd if=/dev/zero of=$SCRATCH_MNT/$filename bs=1024k count=10 >> $seqres.full 2>&1
 echo Punching holes in file
 echo Punching holes in file >> $seqres.full
 for i in $holes ; do
-	echo xfs_io -c "unresvsp `echo $i |$SED_PROG 's/:/ /g'`" $SCRATCH_MNT/$filename >> $seqres.full
-	xfs_io -c "unresvsp `echo $i |$SED_PROG 's/:/ /g'`" $SCRATCH_MNT/$filename ;
+	echo $XFS_IO_PROG -c "unresvsp `echo $i |$SED_PROG 's/:/ /g'`" $SCRATCH_MNT/$filename >> $seqres.full
+	$XFS_IO_PROG -c "unresvsp `echo $i |$SED_PROG 's/:/ /g'`" $SCRATCH_MNT/$filename ;
 done
 
 echo Verifying holes are in the correct spots:
diff --git a/tests/xfs/201 b/tests/xfs/201
index ac8abf4..45dc42f 100755
--- a/tests/xfs/201
+++ b/tests/xfs/201
@@ -56,7 +56,7 @@ do_pwrite()
 	end=`expr $2 \* $min_align`
 	length=`expr $end - $offset`
 
-	xfs_io -d -f $file -c "pwrite $offset $length" >/dev/null
+	$XFS_IO_PROG -d -f $file -c "pwrite $offset $length" >/dev/null
 }
 
 _require_scratch
diff --git a/tests/xfs/229 b/tests/xfs/229
index b8fd914..0a42bcf 100755
--- a/tests/xfs/229
+++ b/tests/xfs/229
@@ -60,7 +60,7 @@ EXTSIZE="256k"
 mkdir ${TDIR}
 
 # Set the test directory extsize
-xfs_io -c "extsize ${EXTSIZE}" ${TDIR}
+$XFS_IO_PROG -c "extsize ${EXTSIZE}" ${TDIR}
 
 # Create a set of holey files
 echo "generating ${NFILES} files"
diff --git a/tests/xfs/250 b/tests/xfs/250
index 0cdc382..f807c5a 100755
--- a/tests/xfs/250
+++ b/tests/xfs/250
@@ -81,7 +81,7 @@ _test_loop()
 	mount -t xfs -o loop $LOOP_DEV $LOOP_MNT
 
 	echo "*** preallocate large file"
-	xfs_io -f -c "resvsp 0 $fsize" $LOOP_MNT/foo | _filter_io
+	$XFS_IO_PROG -f -c "resvsp 0 $fsize" $LOOP_MNT/foo | _filter_io
 
 	echo "*** unmount loop filesystem"
 	umount $LOOP_MNT > /dev/null 2>&1
diff --git a/tests/xfs/291 b/tests/xfs/291
index 808f333..c1cbe32 100755
--- a/tests/xfs/291
+++ b/tests/xfs/291
@@ -69,7 +69,7 @@ done
 sync
 
 # Soak up any remaining freespace
-xfs_io -f -c "pwrite 0 16m" -c "fsync" $SCRATCH_MNT/space_file.large >> $seqres.full 2>&1
+$XFS_IO_PROG -f -c "pwrite 0 16m" -c "fsync" $SCRATCH_MNT/space_file.large >> $seqres.full 2>&1
 
 # Take a look at freespace for any post-mortem on the test
 _scratch_unmount
-- 
2.7.4



* [PATCH 2/2] fstests: run xfs_io as multi threaded process
  2016-10-14 20:43 [PATCH 1/2] fstests: fix call sites that used xfs_io directly Amir Goldstein
@ 2016-10-14 20:43 ` Amir Goldstein
  2016-10-15  9:11   ` Christoph Hellwig
  0 siblings, 1 reply; 8+ messages in thread
From: Amir Goldstein @ 2016-10-14 20:43 UTC (permalink / raw)
  To: Dave Chinner, eguan; +Cc: fstests

Try to run xfs_io in all tests with command line option -M
which starts an idle thread before performing any io.

The purpose of this idle thread is to test io from a multi threaded
process. With single threaded process, the file table is not shared
and file structs are not reference counted.

So in order to improve the chance of detecting file struct reference
leaks, all xfs_io commands in tests will try to run with this option.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
---
 common/rc | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/common/rc b/common/rc
index c3da064..64bf341 100644
--- a/common/rc
+++ b/common/rc
@@ -3799,6 +3799,10 @@ init_rc()
 	xfs_io -c stat $TEST_DIR 2>&1 | grep -q "is not on an XFS filesystem" && \
 	export XFS_IO_PROG="$XFS_IO_PROG -F"
 
+	# Figure out if we can add -M (run as multi threaded) option to xfs_io
+	$XFS_IO_PROG -M -c quit 2>&1 | grep -q "invalid option" || \
+	export XFS_IO_PROG="$XFS_IO_PROG -M"
+
 	# xfs_copy doesn't work on v5 xfs yet without -d option
 	if [ "$FSTYP" == "xfs" ] && [[ $MKFS_OPTIONS =~ crc=1 ]]; then
 		export XFS_COPY_PROG="$XFS_COPY_PROG -d"
-- 
2.7.4
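The probe in the init_rc hunk can be exercised on its own. Below is a hedged sketch of the same pattern using a stand-in shell function in place of xfs_io (`fake_io` is hypothetical, chosen so the sketch runs even where xfs_io is not installed); the real helper simply uses $XFS_IO_PROG instead:

```shell
#!/bin/sh
# fake_io is a hypothetical stand-in for xfs_io: it rejects -M the
# way an old xfs_io would, printing "invalid option" on stderr.
fake_io()
{
	case "$1" in
	-M)
		echo "fake_io: invalid option -- 'M'" >&2
		return 1
		;;
	esac
	return 0
}

IO_PROG="fake_io"

# Same shape as the init_rc hunk above: append the flag only if the
# program does not reject it.
fake_io -M -c quit 2>&1 | grep -q "invalid option" || \
	IO_PROG="$IO_PROG -M"

echo "IO_PROG=$IO_PROG"
```

With a real xfs_io new enough to accept -M, the grep finds no match and the flag is appended, which is exactly what the patch relies on.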



* Re: [PATCH 2/2] fstests: run xfs_io as multi threaded process
  2016-10-14 20:43 ` [PATCH 2/2] fstests: run xfs_io as multi threaded process Amir Goldstein
@ 2016-10-15  9:11   ` Christoph Hellwig
  2016-10-15 15:13     ` Amir Goldstein
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2016-10-15  9:11 UTC (permalink / raw)
  To: Amir Goldstein; +Cc: Dave Chinner, eguan, fstests

On Fri, Oct 14, 2016 at 11:43:30PM +0300, Amir Goldstein wrote:
> Try to run xfs_io in all tests with command line option -M
> which starts an idle thread before performing any io.
> 
> The purpose of this idle thread is to test io from a multi threaded
> process. With single threaded process, the file table is not shared
> and file structs are not reference counted.
> 
> So in order to improve the chance of detecting file struct reference
> leaks, all xfs_io commands in tests will try to run with this option.

I like the idea behind the -M command, but I'm not sure if we should
always use it.  For one this means we won't test the fget fastpath
any more, and second I'd like to know what the impact on xfstests
runtime is.


* Re: [PATCH 2/2] fstests: run xfs_io as multi threaded process
  2016-10-15  9:11   ` Christoph Hellwig
@ 2016-10-15 15:13     ` Amir Goldstein
  2016-10-15 17:04       ` Christoph Hellwig
  0 siblings, 1 reply; 8+ messages in thread
From: Amir Goldstein @ 2016-10-15 15:13 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Dave Chinner, eguan, fstests

On Sat, Oct 15, 2016 at 12:11 PM, Christoph Hellwig <hch@infradead.org> wrote:
> On Fri, Oct 14, 2016 at 11:43:30PM +0300, Amir Goldstein wrote:
>> Try to run xfs_io in all tests with command line option -M
>> which starts an idle thread before performing any io.
>>
>> The purpose of this idle thread is to test io from a multi threaded
>> process. With single threaded process, the file table is not shared
>> and file structs are not reference counted.
>>
>> So in order to improve the chance of detecting file struct reference
>> leaks, all xfs_io commands in tests will try to run with this option.
>
> I like the idea behind the -M command, but I'm not sure if we should
> always use it.  For one this means we won't test the fget fastpath
> any more,

Indeed, I gave this some thought and decided to post the 'always use it'
variant and discuss the other options here.
Random use of the flag - inconsistent results - I don't like it.
Optional use of the flag - doubles the test matrix, and people will
rarely use the non-default setting.
Consistent pseudo random - say only for odd test numbers, unless a test
explicitly requests multi/single threaded xfs_io - I don't like that either.
Make the flag default according to some half-related kernel config option
(say XFS_DEBUG?) - it stinks a bit, but at least it has the advantage that
a large group of people would run xfstests both with and without it.

Please cast your votes and suggest better options if you have any.

> and second I'd like to know what the impact on xfstests
> runtime is.

On which tests or setups would you expect this change to make the most
difference?

I can't say that I have made a statistical analysis of the effect of the
flag on xfstests runtime, but for the -g quick group on a small SSD
partition, I did not observe any noticeable difference in runtime.

I will try to run some micro benchmarks or look for specific tests that
do many file opens and little io, to get more performance numbers.

Amir.


* Re: [PATCH 2/2] fstests: run xfs_io as multi threaded process
  2016-10-15 15:13     ` Amir Goldstein
@ 2016-10-15 17:04       ` Christoph Hellwig
  2016-10-15 20:59         ` Amir Goldstein
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2016-10-15 17:04 UTC (permalink / raw)
  To: Amir Goldstein; +Cc: Christoph Hellwig, Dave Chinner, eguan, fstests

On Sat, Oct 15, 2016 at 06:13:29PM +0300, Amir Goldstein wrote:
> I can't say that I have made a statistical analysis of the effect of the
> flag on xfstests runtime, but for the -g quick group on a small SSD
> partition, I did not observe any noticeable difference in runtime.
> 
> I will try to run some micro benchmarks or look for specific tests that
> do many file opens and little io, to get more performance numbers.

Yes, if there is no effect at least that's not a problem.  I'd just want
confirmation of that.  In the end we probably don't use xfs_io heavily in
parallel on the same fd a lot.


* Re: [PATCH 2/2] fstests: run xfs_io as multi threaded process
  2016-10-15 17:04       ` Christoph Hellwig
@ 2016-10-15 20:59         ` Amir Goldstein
  2016-10-16  7:14           ` Christoph Hellwig
  0 siblings, 1 reply; 8+ messages in thread
From: Amir Goldstein @ 2016-10-15 20:59 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Dave Chinner, eguan, fstests

On Sat, Oct 15, 2016 at 8:04 PM, Christoph Hellwig <hch@infradead.org> wrote:
> On Sat, Oct 15, 2016 at 06:13:29PM +0300, Amir Goldstein wrote:
>> I can't say that I have made a statistical analysis of the effect of the
>> flag on xfstests runtime, but for the -g quick group on a small SSD
>> partition, I did not observe any noticeable difference in runtime.
>>
>> I will try to run some micro benchmarks or look for specific tests that
>> do many file opens and little io, to get more performance numbers.
>

Here goes.
I ran a simple micro benchmark of running 'xfs_io -c quit' 1000 times
with and without the -M flag; -M adds about 0.1sec (pthread_create
overhead, I suppose).
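For reference, the loop behind such a measurement might look like the
skeleton below. 'true' is a stand-in so the sketch runs anywhere; to
reproduce the actual comparison, point PROG at "xfs_io -c quit" and
"xfs_io -M -c quit" and wrap the runs with time(1).

```shell
#!/bin/sh
# Spawn $PROG $N times; timing the two PROG variants against each
# other isolates the per-invocation cost (e.g. pthread_create for -M).
PROG="${PROG:-true}"
N="${N:-100}"

run_loop()
{
	i=0
	while [ "$i" -lt "$N" ]; do
		$PROG
		i=$((i + 1))
	done
}

# e.g.:  time PROG="xfs_io -c quit" sh bench.sh
#        time PROG="xfs_io -M -c quit" sh bench.sh
run_loop
echo "spawned '$PROG' $N times"
```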

Looked for a test that runs a lot of xfs_io and found generic/032, which
runs xfs_io 1700 times, mostly for pwrite. This is not a CPU intensive
test, but there is an avg. runtime difference of +0.2sec with the -M
flag (out of 8sec).

Taking a look at the runtime difference of the entire -g quick run did
not yield any obvious changes; all reported runtimes were within the
+/-1sec margin, and some were clearly noise as the tests were not running
xfs_io at all.

Still, I looked closer for tests that do a lot of small reads/writes and
found generic/130, which does many small preads from only a few xfs_io
runs. This is a more CPU intensive test. There is an avg. runtime
difference of +0.3sec with the -M flag (out of 4sec).

So far so good, but then I looked closer at its sister test generic/132,
which is an even more CPU intensive test, also of many small reads and
writes from a few xfs_io runs. This is not a 'quick' group test.
Here the runtime difference was significant: 17sec without -M and 20sec
with the -M flag.

So without looking much closer into other non quick tests, I think that
perhaps the best value option is to turn on the -M flag for all the
quick tests.

What do you think?


> Yes, if there is no effect at least that's not a problem.  I'd just want
> confirmation of that.  In the end we probably don't use xfs_io heavily in
> parallel on the same fd a lot.

So there is an effect on specific tests that end up calling fdget() a
lot compared to the amount of io they generate, but I don't think that
we have to use xfs_io in parallel on the same fd to see the regression.
The fast path optimization for a single threaded process avoids the
rcu_read_lock() in __fget() altogether, while with a multi threaded
process we take the rcu_read_lock() and other overhead even though we
are the only user of this fd.

This is just my speculation as I did not run perf analysis on those
fdget intensive tests.
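The single vs multi threaded distinction is easy to observe from
userspace via /proc (a sketch assuming Linux; 'sleep' stands in for
xfs_io here — with the real binary, the -M variant would report
Threads: 2):

```shell
#!/bin/sh
# Show the kernel's thread count for a child process.  A plain
# 'sleep' is single threaded; an 'xfs_io -M' process would show 2.
sleep 5 &
pid=$!
threads=$(awk '/^Threads:/ { print $2 }' /proc/$pid/status)
echo "pid $pid runs with $threads thread(s)"
kill "$pid" 2>/dev/null
```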


* Re: [PATCH 2/2] fstests: run xfs_io as multi threaded process
  2016-10-15 20:59         ` Amir Goldstein
@ 2016-10-16  7:14           ` Christoph Hellwig
  2016-10-16  8:51             ` Amir Goldstein
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2016-10-16  7:14 UTC (permalink / raw)
  To: Amir Goldstein; +Cc: Christoph Hellwig, Dave Chinner, eguan, fstests

On Sat, Oct 15, 2016 at 11:59:22PM +0300, Amir Goldstein wrote:
> So far so good, but then I looked closer at its sister test generic/132,
> which is an even more CPU intensive test, also of many small reads and
> writes from a few xfs_io runs. This is not a 'quick' group test.
> Here the runtime difference was significant: 17sec without -M and 20sec
> with the -M flag.
> 
> So without looking much closer into other non quick tests, I think that
> perhaps the best value option is to turn on the -M flag for all the
> quick tests.
> 
> What do you think?

Sounds like a good idea. Now how do we find out in the xfs_io
helper if it's a quick test?


* Re: [PATCH 2/2] fstests: run xfs_io as multi threaded process
  2016-10-16  7:14           ` Christoph Hellwig
@ 2016-10-16  8:51             ` Amir Goldstein
  0 siblings, 0 replies; 8+ messages in thread
From: Amir Goldstein @ 2016-10-16  8:51 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Dave Chinner, Eryu Guan, fstests

On Sun, Oct 16, 2016 at 10:14 AM, Christoph Hellwig <hch@infradead.org> wrote:
> On Sat, Oct 15, 2016 at 11:59:22PM +0300, Amir Goldstein wrote:
>> So far so good, but then I looked closer at its sister test generic/132,
>> which is an even more CPU intensive test, also of many small reads and
>> writes from a few xfs_io runs. This is not a 'quick' group test.
>> Here the runtime difference was significant: 17sec without -M and 20sec
>> with the -M flag.
>>
>> So without looking much closer into other non quick tests, I think that
>> perhaps the best value option is to turn on the -M flag for all the
>> quick tests.
>>
>> What do you think?
>
> Sounds like a good idea, now how do we find out in the xfs_io
> helper if it's a quick test?

See the answer in the posted v2.

Thanks for the review!

