* [patch, v3] add an aio test which closes the fd before destroying the ioctx
@ 2014-06-24 19:34 Jeff Moyer
  2014-06-26 12:55 ` Brian Foster
  2014-08-20 22:57 ` Dave Chinner
  0 siblings, 2 replies; 14+ messages in thread
From: Jeff Moyer @ 2014-06-24 19:34 UTC (permalink / raw)
  To: fstests


By closing the file descriptor before calling io_destroy, you pretty
much guarantee that the last put on the ioctx will be done in interrupt
context (during I/O completion).  This behavior has unearthed bugs in
the kernel in several different kernel versions, so let's add a test to
poke at it.
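
Stripped of the threading and looping, the sequence the test exercises
boils down to something like this (a minimal sketch, not the test
itself: the 4k sizes are arbitrary, most error handling is omitted, and
argv[1] is just some file opened with O_DIRECT; build with -laio):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <libaio.h>

int main(int argc, char **argv)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbp = &cb;
	void *buf;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0 || posix_memalign(&buf, 4096, 4096) || io_setup(1, &ctx))
		return 1;

	io_prep_pread(&cb, fd, buf, 4096, 0);
	io_submit(ctx, 1, &cbp);

	/* drop the fd while the read may still be in flight... */
	close(fd);
	/* ...so the final put of the ioctx likely happens at I/O completion */
	io_destroy(ctx);
	return 0;
}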

The original test case was provided by Matt Cross.  He has graciously
relicensed it under the GPL v2 or later so that it can be included in
xfstests.  I've modified the test a bit so that it would generate a
stable output format and to run for a fixed amount of time.

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>

---
Changes since v2:
- fixed up fd leak
- removed a stale comment
- reduced test time to 60s per iteration (so 2 minutes total)

Changes since v1:
- fixed up coding style
- incorporated other stylistic review comments from dchinner
- fixed the copyright
- use xfs_io instead of dd

diff --git a/src/aio-dio-regress/aio-last-ref-held-by-io.c b/src/aio-dio-regress/aio-last-ref-held-by-io.c
new file mode 100644
index 0000000..a73dc3b
--- /dev/null
+++ b/src/aio-dio-regress/aio-last-ref-held-by-io.c
@@ -0,0 +1,246 @@
+/* Copyright (C) 2010, Matthew E. Cross <matt.cross@gmail.com>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+/* Code to reproduce the aio lockup.
+ *
+ * Make a test file that is at least 4MB long.  Something like this:
+ * 'dd if=/dev/zero of=/tmp/testfile bs=1M count=10'
+ *
+ * Run this test as './aio_test 0 100 /tmp/testfile' to induce the
+ * failure.
+ *
+ * Run this test as './aio_test 1 100 /tmp/testfile' to demonstrate an
+ * incomplete workaround (close fd, then wait for all io to complete
+ * on an io context before calling io_destroy()).  This still induces
+ * the failure.
+ *
+ * This test was written several years ago by Matt Cross, and he has
+ * graciously allowed me to post it for inclusion in xfstests.
+ *
+ * Changelog
+ * - reduce output and make it consistent for integration into xfstests (JEM)
+ * - run for fixed amount of time instead of indefinitely (JEM)
+ * - change coding style to meet xfstests standards (JEM)
+ * - get rid of unused code (workaround 2 documented above) (JEM)
+ * - use posix_memalign (JEM)
+ */
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE /* to get definition of O_DIRECT flag. */
+#endif
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <libgen.h>
+#include <pthread.h>
+#include <unistd.h>
+#include <libaio.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/syscall.h>
+#include <sys/time.h>
+#include <fcntl.h>
+#include <sched.h>
+
+#undef DEBUG
+#ifdef DEBUG
+#define dprintf(fmt, args...) printf(fmt, ##args)
+#else
+#define dprintf(fmt, args...)
+#endif
+
+char *filename;
+int wait_for_events = 0;
+
+pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER;
+unsigned long total_loop_count = 0;
+
+#define NUM_IOS 16
+#define IOSIZE (1024 * 64)
+
+pid_t
+gettid(void)
+{
+	return (pid_t)syscall(SYS_gettid);
+}
+
+void *
+aio_test_thread(void *data)
+{
+	int fd = -1;
+	io_context_t ioctx;
+	int ioctx_initted;
+	int ios_submitted;
+	struct iocb iocbs[NUM_IOS];
+	int i;
+	static unsigned char *buffer;
+	int ret;
+	long mycpu = (long)data;
+	pid_t mytid = gettid();
+	cpu_set_t cpuset;
+
+	dprintf("setting thread %d to run on cpu %ld\n", mytid, mycpu);
+
+	/*
+	 * Problems have been easier to trigger when spreading the
+	 * workload over the available CPUs.
+	 */
+	CPU_ZERO(&cpuset);
+	CPU_SET(mycpu, &cpuset);
+	if (sched_setaffinity(mytid, sizeof(cpuset), &cpuset)) {
+		printf("FAILED to set thread %d to run on cpu %ld\n",
+		       mytid, mycpu);
+	}
+
+	ioctx_initted = 0;
+	ios_submitted = 0;
+
+	ret = posix_memalign((void **)&buffer, getpagesize(), IOSIZE);
+	if (ret != 0) {
+		printf("%lu: Failed to allocate buffer for IO: %d\n",
+		       pthread_self(), ret);
+		goto done;
+	}
+
+	while (1) {
+		fd = open(filename, O_RDONLY | O_DIRECT);
+		if (fd < 0) {
+			printf("%lu: Failed to open file '%s'\n",
+			       pthread_self(), filename);
+			goto done;
+		}
+
+		memset(&ioctx, 0, sizeof(ioctx));
+		if (io_setup(NUM_IOS, &ioctx)) {
+			printf("%lu: Failed to setup io context\n",
+			       pthread_self());
+			goto done;
+		}
+		ioctx_initted = 1;
+
+		if (mycpu != 0) {
+			for (i = 0; i < NUM_IOS; i++) {
+				struct iocb *iocb = &iocbs[i];
+
+				memset(iocb, 0, sizeof(*iocb));
+				io_prep_pread(iocb, fd, buffer,
+					      IOSIZE, i * IOSIZE);
+				if (io_submit(ioctx, 1, &iocb) != 1) {
+					printf("%lu: failed to submit io #%d\n",
+						pthread_self(), i+1);
+				}
+			}
+			ios_submitted = 1;
+		}
+
+done:
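+		/*
+		 * Close the fd while I/O may still be in flight, then
+		 * destroy the ioctx.  This ordering is the point of the
+		 * test: the final put of the ioctx is likely to happen
+		 * from the I/O completion (interrupt) path.
+		 */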
+		if (fd >= 0)
+			close(fd);
+
+		if (wait_for_events && ios_submitted) {
+			struct io_event io_events[NUM_IOS];
+
+			if (io_getevents(ioctx, NUM_IOS, NUM_IOS,
+					 io_events, NULL) != NUM_IOS)
+				printf("io_getevents failed to wait for all IO\n");
+		}
+
+		if (ioctx_initted) {
+			io_destroy(ioctx);
+			ioctx_initted = 0;
+		}
+
+		if (ios_submitted) {
+			pthread_mutex_lock(&count_mutex);
+			total_loop_count++;
+			pthread_mutex_unlock(&count_mutex);
+
+			ios_submitted = 0;
+		}
+	}
+}
+
+int
+main(int argc, char **argv)
+{
+	unsigned num_threads;
+	unsigned i;
+	int fd;
+	pthread_t *threads;
+	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
+	struct timeval start, now, delta = { 0, 0 };
+
+	if (argc != 4) {
+		printf("Usage: aio_test [wait for events?] [# of threads] "
+		       "[filename]\n");
+		return -1;
+	}
+
+	wait_for_events = strtoul(argv[1], NULL, 0);
+	num_threads = strtoul(argv[2], NULL, 0);
+	filename = argv[3];
+
+	printf("wait_for_events: %d\n", wait_for_events);
+	printf("num_threads: %u\n", num_threads);
+	printf("filename: '%s'\n", basename(filename));
+
+	if (num_threads < 1) {
+		printf("Number of threads is invalid, must be at least 1\n");
+		return -1;
+	}
+
+	fd = open(filename, O_RDONLY|O_DIRECT);
+	if (fd < 0) {
+		printf("Failed to open filename '%s' for reading\n", filename);
+		return -1;
+	}
+	close(fd);
+
+	threads = malloc(sizeof(pthread_t) * num_threads);
+	if (threads == NULL) {
+		printf("Failed to allocate thread id storage\n");
+		return -1;
+	}
+
+	for (i = 0; i < num_threads; i++) {
+		if (pthread_create(&threads[i], NULL,
+				   aio_test_thread, (void *)(i % ncpus))) {
+			printf("Failed to create thread #%u\n", i+1);
+			threads[i] = (pthread_t)-1;
+		}
+	}
+
+	printf("All threads spawned\n");
+	gettimeofday(&start, NULL);
+
+	while (delta.tv_sec < 60) {
+		sleep(1);
+		gettimeofday(&now, NULL);
+		timersub(&now, &start, &delta);
+		dprintf("%lu loops completed in %ld seconds\n",
+			total_loop_count, delta.tv_sec);
+	}
+
+	return 0;
+}
diff --git a/tests/generic/323 b/tests/generic/323
new file mode 100644
index 0000000..b84cfc8
--- /dev/null
+++ b/tests/generic/323
@@ -0,0 +1,67 @@
+#! /bin/bash
+# FS QA Test No. 323
+#
+# Run aio-last-ref-held-by-io - last put of ioctx not in process
+# context. We've had a couple of instances in the past where having the
+# last reference to an ioctx be held by the IO (instead of the
+# process) would cause problems (hung system, crashes).
+
+#-----------------------------------------------------------------------
+# Copyright (c) 2014 Red Hat, Inc.  All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+
+# real QA test starts here
+
+_supported_fs generic
+_supported_os Linux
+
+_require_aiodio aio-last-ref-held-by-io
+
+testfile=$TEST_DIR/aio-testfile
+$XFS_IO_PROG -ftc "pwrite 0 10m" $testfile | _filter_xfs_io
+
+$AIO_TEST 0 100 $testfile
+if [ $? -ne 0 ]; then
+	exit $status
+fi
+
+$AIO_TEST 1 100 $testfile
+if [ $? -ne 0 ]; then
+	exit $status
+fi
+
+status=0
+exit $status
diff --git a/tests/generic/323.out b/tests/generic/323.out
new file mode 100644
index 0000000..1baaae7
--- /dev/null
+++ b/tests/generic/323.out
@@ -0,0 +1,11 @@
+QA output created by 323
+wrote 10485760/10485760 bytes at offset 0
+XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wait_for_events: 0
+num_threads: 100
+filename: 'aio-testfile'
+All threads spawned
+wait_for_events: 1
+num_threads: 100
+filename: 'aio-testfile'
+All threads spawned
diff --git a/tests/generic/group b/tests/generic/group
index e851c62..f45399c 100644
--- a/tests/generic/group
+++ b/tests/generic/group
@@ -141,3 +141,4 @@
 320 auto rw
 321 auto quick metadata log
 322 auto quick metadata log
+323 auto aio stress

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-06-24 19:34 [patch, v3] add an aio test which closes the fd before destroying the ioctx Jeff Moyer
@ 2014-06-26 12:55 ` Brian Foster
  2014-08-20 22:57 ` Dave Chinner
  1 sibling, 0 replies; 14+ messages in thread
From: Brian Foster @ 2014-06-26 12:55 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: fstests

On Tue, Jun 24, 2014 at 03:34:27PM -0400, Jeff Moyer wrote:
> 
> By closing the file descriptor before calling io_destroy, you pretty
> much guarantee that the last put on the ioctx will be done in interrupt
> context (during I/O completion).  This behavior has unearthed bugs in
> the kernel in several different kernel versions, so let's add a test to
> poke at it.
> 
> The original test case was provided by Matt Cross.  He has graciously
> relicensed it under the GPL v2 or later so that it can be included in
> xfstests.  I've modified the test a bit so that it would generate a
> stable output format and to run for a fixed amount of time.
> 
> Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
> 

Looks Ok to me..

Reviewed-by: Brian Foster <bfoster@redhat.com>

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-06-24 19:34 [patch, v3] add an aio test which closes the fd before destroying the ioctx Jeff Moyer
  2014-06-26 12:55 ` Brian Foster
@ 2014-08-20 22:57 ` Dave Chinner
  2014-08-20 23:43   ` Jeff Moyer
  1 sibling, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2014-08-20 22:57 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: fstests

On Tue, Jun 24, 2014 at 03:34:27PM -0400, Jeff Moyer wrote:
> 
> By closing the file descriptor before calling io_destroy, you pretty
> much guarantee that the last put on the ioctx will be done in interrupt
> context (during I/O completion).  This behavior has unearthed bugs in
> the kernel in several different kernel versions, so let's add a test to
> poke at it.
> 
> The original test case was provided by Matt Cross.  He has graciously
> relicensed it under the GPL v2 or later so that it can be included in
> xfstests.  I've modified the test a bit so that it would generate a
> stable output format and to run for a fixed amount of time.
> 
> Signed-off-by: Jeff Moyer <jmoyer@redhat.com>

Jeff, this test is causing xfstests to fail unmounts with EBUSY
frequently on some of my test VMs (i.e. in >60% of my test runs in
the past week).

$ sudo MKFS_OPTIONS="-m crc=1,finobt=1" ./check generic/323
FSTYP         -- xfs (debug)
PLATFORM      -- Linux/x86_64 test2 3.16.0-dgc+
MKFS_OPTIONS  -- -f -m crc=1,finobt=1 /dev/vdb
MOUNT_OPTIONS -- /dev/vdb /mnt/scratch

generic/323 121s ... 121s
umount: /mnt/test: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
_check_xfs_filesystem: filesystem on /dev/vda has dirty log (see /home/dave/src/xfstests-dev/results//generic/323.full)
_check_xfs_filesystem: filesystem on /dev/vda is inconsistent (c) (see /home/dave/src/xfstests-dev/results//generic/323.full)
_check_xfs_filesystem: filesystem on /dev/vda is inconsistent (r) (see /home/dave/src/xfstests-dev/results//generic/323.full)
Ran: generic/323
Passed all 1 tests
$ sudo umount /mnt/test
$

i.e. something the test is doing is leaving the superblock
referenced after all the processes have finished and exited, but an
unmount run by hand immediately afterwards works just fine. So the
situation only persists for a couple of seconds. Adding a "sleep 5"
to the test just before it exits also makes the failure go away.

I have only ever seen this same issue with generic/208 - it's been
doing this randomly for as long as I can remember. That test is also
a aio+dio test and adding the same "sleep 5" makes that test no
longer show the issue.

IOWs, we now have two AIO+DIO tests showing the same symptoms that
no other tests show. This tends to point at AIO not being fully
cleaned up and completely freed by the time the processes
dispatching it have exit()d. This failure generally occurs when
there is other load on the system/disks backing the test VM (e.g.
running xfstests in multiple VMs at the same time) so I suspect it
has to do with IO completion taking a long time.

Can you spend some time trying to reproduce this and getting to the
bottom of whatever is triggering the unmount error?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-20 22:57 ` Dave Chinner
@ 2014-08-20 23:43   ` Jeff Moyer
  2014-08-21  9:16     ` Dave Chinner
  2014-08-21 16:57     ` Zach Brown
  0 siblings, 2 replies; 14+ messages in thread
From: Jeff Moyer @ 2014-08-20 23:43 UTC (permalink / raw)
  To: Dave Chinner; +Cc: fstests

Hi, Dave,

Dave Chinner <david@fromorbit.com> writes:

> IOWs, we now have two AIO+DIO tests showing the same symptoms that
> no other tests show. This tends to point at AIO not being fully
> cleaned up and completely freed by the time the processes
> dispatching it have exit()d. This failure generally occurs when
> there is other load on the system/disks backing the test VM (e.g.
> running xfstests in multiple VMs at the same time) so I suspect it
> has to do with IO completion taking a long time.

Process exit waits for all outstanding I/O, but maybe it's an rcu thing.

> Can you spend some time trying to reproduce this and getting to the
> bottom of whatever is triggering the unmount error?

I can take a look, but not until next week.  Hopefully that's ok?

Cheers,
Jeff

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-20 23:43   ` Jeff Moyer
@ 2014-08-21  9:16     ` Dave Chinner
  2014-08-21 16:57     ` Zach Brown
  1 sibling, 0 replies; 14+ messages in thread
From: Dave Chinner @ 2014-08-21  9:16 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: fstests

On Wed, Aug 20, 2014 at 07:43:19PM -0400, Jeff Moyer wrote:
> Hi, Dave,
> 
> Dave Chinner <david@fromorbit.com> writes:
> 
> > IOWs, we now have two AIO+DIO tests showing the same symptoms that
> > no other tests show. This tends to point at AIO not being fully
> > cleaned up and completely freed by the time the processes
> > dispatching it have exit()d. This failure generally occurs when
> > there is other load on the system/disks backing the test VM (e.g.
> > running xfstests in multiple VMs at the same time) so I suspect it
> > has to do with IO completion taking a long time.
> 
> Process exit waits for all outstanding I/O, but maybe it's an rcu thing.

Hmmm - __fput()?

> > Can you spend some time trying to reproduce this and getting to the
> > bottom of whatever is triggering the unmount error?
> 
> I can take a look, but not until next week.  Hopefully that's ok?

Yeah, that's fine. I don't have the bandwidth to look at it either
right now, so I'll just expunge the test for the meantime.

Thanks, Jeff!

-Dave.
-- 
Dave Chinner
david@fromorbit.com

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-20 23:43   ` Jeff Moyer
  2014-08-21  9:16     ` Dave Chinner
@ 2014-08-21 16:57     ` Zach Brown
  2014-08-25 16:50       ` Benjamin LaHaise
  1 sibling, 1 reply; 14+ messages in thread
From: Zach Brown @ 2014-08-21 16:57 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Dave Chinner, fstests, Benjamin LaHaise

On Wed, Aug 20, 2014 at 07:43:19PM -0400, Jeff Moyer wrote:
> Hi, Dave,
> 
> Dave Chinner <david@fromorbit.com> writes:
> 
> > IOWs, we now have two AIO+DIO tests showing the same symptoms that
> > no other tests show. This tends to point at AIO not being fully
> > cleaned up and completely freed by the time the processes
> > dispatching it have exit()d. This failure generally occurs when
> > there is other load on the system/disks backing the test VM (e.g.
> > running xfstests in multiple VMs at the same time) so I suspect it
> > has to do with IO completion taking a long time.
> 
> Process exit waits for all outstanding I/O, but maybe it's an rcu thing.

I thought it did too but it doesn't look like upstream exit_aio() is
waiting for iocbs to complete.

Ben, are you digging in to this?  Want me to throw something together?

- z

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-21 16:57     ` Zach Brown
@ 2014-08-25 16:50       ` Benjamin LaHaise
  2014-08-25 17:55         ` Jeff Moyer
                           ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Benjamin LaHaise @ 2014-08-25 16:50 UTC (permalink / raw)
  To: Zach Brown; +Cc: Jeff Moyer, Dave Chinner, fstests

On Thu, Aug 21, 2014 at 09:57:50AM -0700, Zach Brown wrote:
> On Wed, Aug 20, 2014 at 07:43:19PM -0400, Jeff Moyer wrote:
> > Hi, Dave,
> > 
> > Dave Chinner <david@fromorbit.com> writes:
> > 
> > > IOWs, we now have two AIO+DIO tests showing the same symptoms that
> > > no other tests show. This tends to point at AIO not being fully
> > > cleaned up and completely freed by the time the processes
> > > dispatching it have exit()d. This failure generally occurs when
> > > there is other load on the system/disks backing the test VM (e.g.
> > > running xfstests in multiple VMs at the same time) so I suspect it
> > > has to do with IO completion taking a long time.
> > 
> > Process exit waits for all outstanding I/O, but maybe it's an rcu thing.
> 
> I thought it did too but it doesn't look like upstream exit_aio() is
> waiting for iocbs to complete.
> 
> Ben, are you digging in to this?  Want me to throw something together?

Something like the following should fix it.  This is only lightly tested.  
Does someone already have a simple test case we can add to the libaio test 
suite to verify this behaviour?  I'm assuming that waiting for one ioctx 
at a time is sufficient and we don't need to parallelise cancellation at 
exit.

		-ben
-- 
"Thought is the essence of where you are now."

diff --git a/fs/aio.c b/fs/aio.c
index 97bc62c..c558e9a 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -732,7 +732,6 @@ static int kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
 	if (atomic_xchg(&ctx->dead, 1))
 		return -EINVAL;
 
-
 	spin_lock(&mm->ioctx_lock);
 	table = rcu_dereference_raw(mm->ioctx_table);
 	WARN_ON(ctx != table->table[ctx->id]);
@@ -792,6 +791,8 @@ void exit_aio(struct mm_struct *mm)
 		return;
 
 	for (i = 0; i < table->nr; ++i) {
+		struct completion requests_done =
+			COMPLETION_INITIALIZER_ONSTACK(requests_done);
 		struct kioctx *ctx = table->table[i];
 
 		if (!ctx)
@@ -804,7 +805,8 @@ void exit_aio(struct mm_struct *mm)
 		 * that it needs to unmap the area, just set it to 0.
 		 */
 		ctx->mmap_size = 0;
-		kill_ioctx(mm, ctx, NULL);
+		if (!kill_ioctx(mm, ctx, &requests_done))
+			wait_for_completion(&requests_done);
 	}
 
 	RCU_INIT_POINTER(mm->ioctx_table, NULL);

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-25 16:50       ` Benjamin LaHaise
@ 2014-08-25 17:55         ` Jeff Moyer
  2014-08-25 23:12         ` Dave Chinner
  2014-08-26 16:05         ` Jeff Moyer
  2 siblings, 0 replies; 14+ messages in thread
From: Jeff Moyer @ 2014-08-25 17:55 UTC (permalink / raw)
  To: Benjamin LaHaise; +Cc: Zach Brown, Dave Chinner, fstests

Benjamin LaHaise <bcrl@kvack.org> writes:

> Does someone already have a simple test case we can add to the libaio test 
> suite to verify this behaviour?

Well, we're running into this with xfstests, so we have a unit test that
will exit w/o waiting on iocbs.  Follow that up with a umount of the fs,
and it should be doable in the libaio test harness.  I'll cook something
up.
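
Roughly, I'm picturing something along these lines (just a sketch of
the idea, not the final harness test; the 64k request size and the lack
of error reporting are placeholders, and argv[1] would be a file on the
filesystem the wrapper later tries to unmount):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <libaio.h>

#define NR_IOS	16
#define IOSZ	(64 * 1024)

int main(int argc, char **argv)
{
	io_context_t ctx = 0;
	struct iocb cbs[NR_IOS], *cbp[NR_IOS];
	void *buf;
	int fd, i;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0 || posix_memalign(&buf, 4096, IOSZ) ||
	    io_setup(NR_IOS, &ctx))
		return 1;

	for (i = 0; i < NR_IOS; i++) {
		io_prep_pread(&cbs[i], fd, buf, IOSZ, (long long)i * IOSZ);
		cbp[i] = &cbs[i];
	}
	io_submit(ctx, NR_IOS, cbp);

	/* exit with I/O in flight: no io_getevents(), no io_destroy() */
	close(fd);
	_exit(0);
}

The wrapper would then unmount the filesystem straight away and treat
EBUSY as a failure.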

> I'm assuming that waiting for one ioctx at a time is sufficient and we
> don't need to parallelise cancellation at exit.

I don't know of any workload that would be adversely affected, for
whatever that's worth.

Cheers,
Jeff

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-25 16:50       ` Benjamin LaHaise
  2014-08-25 17:55         ` Jeff Moyer
@ 2014-08-25 23:12         ` Dave Chinner
  2014-08-26 16:05         ` Jeff Moyer
  2 siblings, 0 replies; 14+ messages in thread
From: Dave Chinner @ 2014-08-25 23:12 UTC (permalink / raw)
  To: Benjamin LaHaise; +Cc: Zach Brown, Jeff Moyer, fstests

On Mon, Aug 25, 2014 at 12:50:43PM -0400, Benjamin LaHaise wrote:
> On Thu, Aug 21, 2014 at 09:57:50AM -0700, Zach Brown wrote:
> > On Wed, Aug 20, 2014 at 07:43:19PM -0400, Jeff Moyer wrote:
> > > Hi, Dave,
> > > 
> > > Dave Chinner <david@fromorbit.com> writes:
> > > 
> > > > IOWs, we now have two AIO+DIO tests showing the same symptoms that
> > > > no other tests show. This tends to point at AIO not being fully
> > > > cleaned up and completely freed by the time the processes
> > > > dispatching it have exit()d. This failure generally occurs when
> > > > there is other load on the system/disks backing the test VM (e.g.
> > > > running xfstests in multiple VMs at the same time) so I suspect it
> > > > has to do with IO completion taking a long time.
> > > 
> > > Process exit waits for all outstanding I/O, but maybe it's an rcu thing.
> > 
> > I thought it did too but it doesn't look like upstream exit_aio() is
> > waiting for iocbs to complete.
> > 
> > Ben, are you digging in to this?  Want me to throw something together?
> 
> Something like the following should fix it.  This is only lightly tested.  
> Does someone already have a simple test case we can add to the libaio test 
> suite to verify this behaviour?  I'm assuming that waiting for one ioctx 
> at a time is sufficient and we don't need to parallelise cancellation at 
> exit.

Both xfstests::generic/208 and xfstests::generic/323 reproduce this.
I'm seeing a long term failure rate (i.e. over the past year) of
around 15% for generic/208 on my test VMs....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-25 16:50       ` Benjamin LaHaise
  2014-08-25 17:55         ` Jeff Moyer
  2014-08-25 23:12         ` Dave Chinner
@ 2014-08-26 16:05         ` Jeff Moyer
  2014-08-26 17:27           ` Zach Brown
  2 siblings, 1 reply; 14+ messages in thread
From: Jeff Moyer @ 2014-08-26 16:05 UTC (permalink / raw)
  To: Benjamin LaHaise; +Cc: Zach Brown, Dave Chinner, fstests

Benjamin LaHaise <bcrl@kvack.org> writes:

> Does someone already have a simple test case we can add to the libaio test 
> suite to verify this behaviour?

I can't reproduce this problem using a loop device, which is what the
libaio test suite uses.  Even when using real hardware, you have to have
disks that are slow enough in order for this to trigger reliably (or
at all).  Given those two points, I think it might make sense to
continue to use xfstests to find this problem for us, since it already
has the infrastructure in place to test real devices and since it
already triggers the problem.  I could write a more targeted test within
xfstests, but I don't think that's strictly necessary (it would just
make it more clear what the expectations are, and maybe bump the hit
rate percentage up).

Cheers,
Jeff

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-26 16:05         ` Jeff Moyer
@ 2014-08-26 17:27           ` Zach Brown
  2014-08-26 17:32             ` Jeff Moyer
  2014-08-27  8:49             ` Dave Chinner
  0 siblings, 2 replies; 14+ messages in thread
From: Zach Brown @ 2014-08-26 17:27 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Benjamin LaHaise, Dave Chinner, fstests

On Tue, Aug 26, 2014 at 12:05:11PM -0400, Jeff Moyer wrote:
> Benjamin LaHaise <bcrl@kvack.org> writes:
> 
> > Does someone already have a simple test case we can add to the libaio test 
> > suite to verify this behaviour?
> 
> I can't reproduce this problem using a loop device, which is what the
> libaio test suite uses.  Even when using real hardware, you have to have
> disks that are slow enough in order for this to trigger reliably (or
> at all).

I wonder if you could use something like dm suspend to abuse indefinite
latencies.

> I could write a more targeted test within xfstests, but I don't think
> that's strictly necessary (it would just make it more clear what the
> expectations are, and maybe bump the hit rate percentage up).

I think it'd be worth it (he says, not committing *his* time).  It would
have been nice if a targeted test helped Dave raise the alarm
immediately rather than gnaw away at his brain with inconsistent mostly
unrelated failures for months.

- z

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-26 17:27           ` Zach Brown
@ 2014-08-26 17:32             ` Jeff Moyer
  2014-08-27  8:49             ` Dave Chinner
  1 sibling, 0 replies; 14+ messages in thread
From: Jeff Moyer @ 2014-08-26 17:32 UTC (permalink / raw)
  To: Zach Brown; +Cc: Benjamin LaHaise, Dave Chinner, fstests

Zach Brown <zab@zabbo.net> writes:

> On Tue, Aug 26, 2014 at 12:05:11PM -0400, Jeff Moyer wrote:
>> Benjamin LaHaise <bcrl@kvack.org> writes:
>> 
>> > Does someone already have a simple test case we can add to the libaio test 
>> > suite to verify this behaviour?
>> 
>> I can't reproduce this problem using a loop device, which is what the
>> libaio test suite uses.  Even when using real hardware, you have to have
>> disks that are slow enough in order for this to trigger reliably (or
>> at all).
>
> I wonder if you could use something like dm suspend to abuse indefinite
> latencies.

Or perhaps dm-delay.

>> I could write a more targeted test within xfstests, but I don't think
>> that's strictly necessary (it would just make it more clear what the
>> expectations are, and maybe bump the hit rate percentage up).
>
> I think it'd be worth it (he says, not committing *his* time).  It would
> have been nice if a targeted test helped Dave raise the alarm
> immediately rather than gnaw away at his brain with inconsistent mostly
> unrelated failures for months.

Sure.

-Jeff

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-26 17:27           ` Zach Brown
  2014-08-26 17:32             ` Jeff Moyer
@ 2014-08-27  8:49             ` Dave Chinner
  2014-08-27 10:08               ` Dave Chinner
  1 sibling, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2014-08-27  8:49 UTC (permalink / raw)
  To: Zach Brown; +Cc: Jeff Moyer, Benjamin LaHaise, fstests

On Tue, Aug 26, 2014 at 10:27:40AM -0700, Zach Brown wrote:
> On Tue, Aug 26, 2014 at 12:05:11PM -0400, Jeff Moyer wrote:
> > Benjamin LaHaise <bcrl@kvack.org> writes:
> > 
> > > Does someone already have a simple test case we can add to the libaio test 
> > > suite to verify this behaviour?
> > 
> > I can't reproduce this problem using a loop device, which is what the
> > libaio test suite uses.  Even when using real hardware, you have to have
> > disks that are slow enough in order for this to trigger reliably (or
> > at all).
> 
> I wonder if you could use something like dm suspend to abuse indefinite
> latencies.
> 
> > I could write a more targeted test within xfstests, but I don't think
> > that's strictly necessary (it would just make it more clear what the
> > expectations are, and maybe bump the hit rate percentage up).
> 
> I think it'd be worth it (he says, not committing *his* time).  It would
> have been nice if a targeted test helped Dave raise the alarm
> immediately rather than gnaw away at his brain with inconsistent mostly
> unrelated failures for months.

I'm not sure it's worth the effort. Now that we have two tests that have
triggered the same problem, I've been easily able to reproduce it
with 2 VMs with test/scratch image files sharing the same spindle.
i.e. run xfstests in one VM, run generic/323 in the other VM, and
it reproduces fairly easily.

I'm just running it in a loop now to measure how successfully I'm
reproducing the problem, then I'll apply the fix and see if it gets
better. If it does get better, then I'll keep the patch around
locally until it is upstream, and then I'll shout whenever I see
this problem occur again....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

* Re: [patch, v3] add an aio test which closes the fd before destroying the ioctx
  2014-08-27  8:49             ` Dave Chinner
@ 2014-08-27 10:08               ` Dave Chinner
  0 siblings, 0 replies; 14+ messages in thread
From: Dave Chinner @ 2014-08-27 10:08 UTC (permalink / raw)
  To: Zach Brown; +Cc: Jeff Moyer, Benjamin LaHaise, fstests

On Wed, Aug 27, 2014 at 06:49:22PM +1000, Dave Chinner wrote:
> On Tue, Aug 26, 2014 at 10:27:40AM -0700, Zach Brown wrote:
> > On Tue, Aug 26, 2014 at 12:05:11PM -0400, Jeff Moyer wrote:
> > > Benjamin LaHaise <bcrl@kvack.org> writes:
> > > 
> > > > Does someone already have a simple test case we can add to the libaio test 
> > > > suite to verify this behaviour?
> > > 
> > > I can't reproduce this problem using a loop device, which is what the
> > > libaio test suite uses.  Even when using real hardware, you have to have
> > > disks that are slow enough in order for this to trigger reliably (or
> > > at all).
> > 
> > I wonder if you could use something like dm suspend to abuse indefinite
> > latencies.
> > 
> > > I could write a more targeted test within xfstests, but I don't think
> > > that's strictly necessary (it would just make it more clear what the
> > > expectations are, and maybe bump the hit rate percentage up).
> > 
> > I think it'd be worth it (he says, not committing *his* time).  It would
> > have been nice if a targeted test helped Dave raise the alarm
> > immediately rather than gnaw away at his brain with inconsistent mostly
> > unrelated failures for months.
> 
> I'm not sure it's worth the effort. Now that we have two tests that have
> triggered the same problem, I've been easily able to reproduce it
> with 2 VMs with test/scratch image files sharing the same spindle.
> i.e. run xfstests in one VM, run generic/323 in the other VM, and
> it reproduces fairly easily.
> 
> I'm just running it in a loop now to measure how successfully I'm
> reproducing the problem, then I'll apply the fix and see if it gets
> better. If it does get better, then I'll keep the patch around
> locally until it is upstream, and then I'll shout whenever I see
> this problem occur again....

Ok, so of 32 executions in a tight loop of generic/323, only 5
executions passed while 27 failed.

With the patch suggested, it failed the first 5 executions, so I
don't think it fixes the problem.

BTW, generic/323 is pulling 8,000 read IOPS and 500MB/s from my
single spindle. Methinks that the test file is resident in the BBWC
on the RAID controller, which may be why nobody else is reproducing
this problem....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
