linux-kernel.vger.kernel.org archive mirror
* [PATCH] fix readahead pipeline break caused by block plug
@ 2012-01-31  7:59 Shaohua Li
  2012-01-31  8:36 ` Christoph Hellwig
                   ` (5 more replies)
  0 siblings, 6 replies; 28+ messages in thread
From: Shaohua Li @ 2012-01-31  7:59 UTC (permalink / raw)
  To: lkml, linux-mm
  Cc: Andrew Morton, Jens Axboe, Herbert Poetzl, Eric Dumazet,
	Vivek Goyal, Wu Fengguang

Herbert Poetzl reported a performance regression since 2.6.39. The test
is a simple dd read, but with a big block size. The reason is:

T1: ra (A, A+128k), (A+128k, A+256k)
T2: lock_page for page A, submit the 256k
T3: hit page A+128k, ra (A+256k, A+384k). The range isn't submitted
because of the plug, and there isn't any lock_page until we hit page
A+256k because all pages from A to A+256k are in memory
T4: hit page A+256k, ra (A+384k, A+512k). Because of the plug, the range
isn't submitted again.
T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is
waiting for (A+256k, A+512k) to finish.

There is no request to disk in T3 and T4, so the readahead pipeline breaks.
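
The nesting behaviour behind this is roughly the following (a simplified
sketch of the block/blk-core.c plug logic, not the exact source): an inner
blk_start_plug() does not install itself, so the bios submitted under it
pile up on the outer plug and only go out when that outer plug is finished
or the task goes to sleep.

void blk_start_plug(struct blk_plug *plug)
{
        INIT_LIST_HEAD(&plug->list);
        /*
         * A nested plug is not installed: requests submitted while an
         * outer plug is held keep accumulating on current->plug.
         */
        if (!current->plug)
                current->plug = plug;
}

void blk_finish_plug(struct blk_plug *plug)
{
        /* For a nested plug this list is empty, so nothing is dispatched. */
        blk_flush_plug_list(plug, false);
        if (plug == current->plug)
                current->plug = NULL;
}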

We really don't need the block plug in generic_file_aio_read() for buffered
I/O. Readahead already has its own plug and fine-grained control over when
I/O should be submitted, as sketched below. Deleting the plug for buffered
I/O fixes the regression.
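
For reference, read_pages() in mm/readahead.c already looks roughly like
this at the time (simplified sketch from memory, not the exact source): it
plugs around exactly one batch of readahead pages and unplugs as soon as
the batch has been submitted.

static int read_pages(struct address_space *mapping, struct file *filp,
                struct list_head *pages, unsigned nr_pages)
{
        struct blk_plug plug;
        int ret = 0;

        blk_start_plug(&plug);

        if (mapping->a_ops->readpages) {
                ret = mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
                put_pages_list(pages);  /* clean up the remaining pages */
        } else {
                /*
                 * ... otherwise add each page to the page cache and call
                 * mapping->a_ops->readpage() on it, one page at a time ...
                 */
        }

        blk_finish_plug(&plug);
        return ret;
}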

One side effect is that the plug makes the request size 256k, while it is
128k without it. That is only because the default readahead size is 128k,
and is not a reason to keep the plug here.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Tested-by: Herbert Poetzl <herbert@13thfloor.at>
Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

diff --git a/mm/filemap.c b/mm/filemap.c
index 97f49ed..b662757 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1400,15 +1400,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
 	unsigned long seg = 0;
 	size_t count;
 	loff_t *ppos = &iocb->ki_pos;
-	struct blk_plug plug;
 
 	count = 0;
 	retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
 	if (retval)
 		return retval;
 
-	blk_start_plug(&plug);
-
 	/* coalesce the iovecs and go direct-to-BIO for O_DIRECT */
 	if (filp->f_flags & O_DIRECT) {
 		loff_t size;
@@ -1424,8 +1421,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
 			retval = filemap_write_and_wait_range(mapping, pos,
 					pos + iov_length(iov, nr_segs) - 1);
 			if (!retval) {
+				struct blk_plug plug;
+
+				blk_start_plug(&plug);
 				retval = mapping->a_ops->direct_IO(READ, iocb,
 							iov, pos, nr_segs);
+				blk_finish_plug(&plug);
 			}
 			if (retval > 0) {
 				*ppos = pos + retval;
@@ -1481,7 +1482,6 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
 			break;
 	}
 out:
-	blk_finish_plug(&plug);
 	return retval;
 }
 EXPORT_SYMBOL(generic_file_aio_read);



^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31  7:59 [PATCH] fix readahead pipeline break caused by block plug Shaohua Li
@ 2012-01-31  8:36 ` Christoph Hellwig
  2012-01-31  8:48 ` Eric Dumazet
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Christoph Hellwig @ 2012-01-31  8:36 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Vivek Goyal, Wu Fengguang

On Tue, Jan 31, 2012 at 03:59:40PM +0800, Shaohua Li wrote:
> Herbert Poetzl reported a performance regression since 2.6.39. The test
> is a simple dd read, but with big block size. The reason is:
> 
> T1: ra (A, A+128k), (A+128k, A+256k)
> T2: lock_page for page A, submit the 256k
> T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
> because of plug and there isn't any lock_page till we hit page A+256k
> because all pages from A to A+256k is in memory
> T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
> submitted again.
> T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is
> waitting for (A+256k, A+512k) finish.
> 
> There is no request to disk in T3 and T4, so readahead pipeline breaks.
> 
> We really don't need block plug for generic_file_aio_read() for buffered
> I/O. The readahead already has plug and has fine grained control when I/O
> should be submitted. Deleting plug for buffered I/O fixes the regression.
> 
> One side effect is plug makes the request size 256k, the size is 128k
> without it. This is because default ra size is 128k and not a reason we
> need plug here.
> 
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> Tested-by: Herbert Poetzl <herbert@13thfloor.at>
> Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

Please also CC -stable on this.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31  7:59 [PATCH] fix readahead pipeline break caused by block plug Shaohua Li
  2012-01-31  8:36 ` Christoph Hellwig
@ 2012-01-31  8:48 ` Eric Dumazet
  2012-01-31  8:50   ` Herbert Poetzl
  2012-01-31  8:53   ` Shaohua Li
  2012-01-31 10:20 ` Wu Fengguang
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 28+ messages in thread
From: Eric Dumazet @ 2012-01-31  8:48 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Vivek Goyal, Wu Fengguang

On Tuesday 31 January 2012 at 15:59 +0800, Shaohua Li wrote:
> Herbert Poetzl reported a performance regression since 2.6.39. The test
> is a simple dd read, but with big block size. The reason is:
> 
> T1: ra (A, A+128k), (A+128k, A+256k)
> T2: lock_page for page A, submit the 256k
> T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
> because of plug and there isn't any lock_page till we hit page A+256k
> because all pages from A to A+256k is in memory
> T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
> submitted again.
> T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is
> waitting for (A+256k, A+512k) finish.
> 
> There is no request to disk in T3 and T4, so readahead pipeline breaks.
> 
> We really don't need block plug for generic_file_aio_read() for buffered
> I/O. The readahead already has plug and has fine grained control when I/O
> should be submitted. Deleting plug for buffered I/O fixes the regression.
> 
> One side effect is plug makes the request size 256k, the size is 128k
> without it. This is because default ra size is 128k and not a reason we
> need plug here.
> 
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> Tested-by: Herbert Poetzl <herbert@13thfloor.at>
> Tested-by: Eric Dumazet <eric.dumazet@gmail.com>

Hmm, this is not exactly the patch I tested from Wu Fengguang 

I'll test this one before adding my "Tested-by: ..."

> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 97f49ed..b662757 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1400,15 +1400,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>  	unsigned long seg = 0;
>  	size_t count;
>  	loff_t *ppos = &iocb->ki_pos;
> -	struct blk_plug plug;
>  
>  	count = 0;
>  	retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
>  	if (retval)
>  		return retval;
>  
> -	blk_start_plug(&plug);
> -
>  	/* coalesce the iovecs and go direct-to-BIO for O_DIRECT */
>  	if (filp->f_flags & O_DIRECT) {
>  		loff_t size;
> @@ -1424,8 +1421,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>  			retval = filemap_write_and_wait_range(mapping, pos,
>  					pos + iov_length(iov, nr_segs) - 1);
>  			if (!retval) {
> +				struct blk_plug plug;
> +
> +				blk_start_plug(&plug);

This part was not in the patch I tested yesterday.

>  				retval = mapping->a_ops->direct_IO(READ, iocb,
>  							iov, pos, nr_segs);
> +				blk_finish_plug(&plug);
>  			}
>  			if (retval > 0) {
>  				*ppos = pos + retval;
> @@ -1481,7 +1482,6 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>  			break;
>  	}
>  out:
> -	blk_finish_plug(&plug);
>  	return retval;
>  }
>  EXPORT_SYMBOL(generic_file_aio_read);
> 
> 



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31  8:48 ` Eric Dumazet
@ 2012-01-31  8:50   ` Herbert Poetzl
  2012-01-31  8:53   ` Shaohua Li
  1 sibling, 0 replies; 28+ messages in thread
From: Herbert Poetzl @ 2012-01-31  8:50 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Shaohua Li, lkml, linux-mm, Andrew Morton, Jens Axboe,
	Vivek Goyal, Wu Fengguang

On Tue, Jan 31, 2012 at 09:48:42AM +0100, Eric Dumazet wrote:
> Le mardi 31 janvier 2012 à 15:59 +0800, Shaohua Li a écrit :
>> Herbert Poetzl reported a performance regression since 2.6.39. The test
>> is a simple dd read, but with big block size. The reason is:

>> T1: ra (A, A+128k), (A+128k, A+256k)
>> T2: lock_page for page A, submit the 256k
>> T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
>> because of plug and there isn't any lock_page till we hit page A+256k
>> because all pages from A to A+256k is in memory
>> T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
>> submitted again.
>> T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is
>> waitting for (A+256k, A+512k) finish.

>> There is no request to disk in T3 and T4, so readahead pipeline breaks.

>> We really don't need block plug for generic_file_aio_read() for buffered
>> I/O. The readahead already has plug and has fine grained control when I/O
>> should be submitted. Deleting plug for buffered I/O fixes the regression.

>> One side effect is plug makes the request size 256k, the size is 128k
>> without it. This is because default ra size is 128k and not a reason we
>> need plug here.

>> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
>> Tested-by: Herbert Poetzl <herbert@13thfloor.at>
>> Tested-by: Eric Dumazet <eric.dumazet@gmail.com>

> Hmm, this is not exactly the patch I tested from Wu Fengguang 

> I'll test this one before adding my "Tested-by: ..."

second that

>> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index 97f49ed..b662757 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -1400,15 +1400,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>>  	unsigned long seg = 0;
>>  	size_t count;
>>  	loff_t *ppos = &iocb->ki_pos;
>> -	struct blk_plug plug;

>>  	count = 0;
>>  	retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
>>  	if (retval)
>>  		return retval;

>> -	blk_start_plug(&plug);
>> -
>>  	/* coalesce the iovecs and go direct-to-BIO for O_DIRECT */
>>  	if (filp->f_flags & O_DIRECT) {
>>  		loff_t size;
>> @@ -1424,8 +1421,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>>  			retval = filemap_write_and_wait_range(mapping, pos,
>>  					pos + iov_length(iov, nr_segs) - 1);
>>  			if (!retval) {
>> +				struct blk_plug plug;
>> +
>> +				blk_start_plug(&plug);

> This part was not on the tested patch yesterday.

yep, will test this one tonight ...

>>  				retval = mapping->a_ops->direct_IO(READ, iocb,
>>  							iov, pos, nr_segs);
>> +				blk_finish_plug(&plug);
>>  			}
>>  			if (retval > 0) {
>>  				*ppos = pos + retval;
>> @@ -1481,7 +1482,6 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>>  			break;
>>  	}
>>  out:
>> -	blk_finish_plug(&plug);
>>  	return retval;
>>  }
>>  EXPORT_SYMBOL(generic_file_aio_read);


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31  8:48 ` Eric Dumazet
  2012-01-31  8:50   ` Herbert Poetzl
@ 2012-01-31  8:53   ` Shaohua Li
  2012-01-31  9:17     ` Eric Dumazet
  1 sibling, 1 reply; 28+ messages in thread
From: Shaohua Li @ 2012-01-31  8:53 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Vivek Goyal, Wu Fengguang

On Tue, 2012-01-31 at 09:48 +0100, Eric Dumazet wrote:
> Le mardi 31 janvier 2012 à 15:59 +0800, Shaohua Li a écrit :
> > Herbert Poetzl reported a performance regression since 2.6.39. The test
> > is a simple dd read, but with big block size. The reason is:
> > 
> > T1: ra (A, A+128k), (A+128k, A+256k)
> > T2: lock_page for page A, submit the 256k
> > T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
> > because of plug and there isn't any lock_page till we hit page A+256k
> > because all pages from A to A+256k is in memory
> > T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
> > submitted again.
> > T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is
> > waitting for (A+256k, A+512k) finish.
> > 
> > There is no request to disk in T3 and T4, so readahead pipeline breaks.
> > 
> > We really don't need block plug for generic_file_aio_read() for buffered
> > I/O. The readahead already has plug and has fine grained control when I/O
> > should be submitted. Deleting plug for buffered I/O fixes the regression.
> > 
> > One side effect is plug makes the request size 256k, the size is 128k
> > without it. This is because default ra size is 128k and not a reason we
> > need plug here.
> > 
> > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > Tested-by: Herbert Poetzl <herbert@13thfloor.at>
> > Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
> 
> Hmm, this is not exactly the patch I tested from Wu Fengguang 
> 
> I'll test this one before adding my "Tested-by: ..."
The added lines should not matter; we still need the plug for the
direct-IO case.
Really sorry about this, I should have asked you to test it before adding
the Tested-by.

Thanks,
Shaohua


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31  8:53   ` Shaohua Li
@ 2012-01-31  9:17     ` Eric Dumazet
  0 siblings, 0 replies; 28+ messages in thread
From: Eric Dumazet @ 2012-01-31  9:17 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Vivek Goyal, Wu Fengguang

On Tuesday 31 January 2012 at 16:53 +0800, Shaohua Li wrote:

> That added lines should not matter. We still need plug for direct-io
> case.
> Really sorry for this, I should ask you test it before adding the
> Tested-by.

No problem, but I prefer to test it in its final form before adding a TB.

I ran the test just now and everything seems fine to me, thanks!

Tested-by: Eric Dumazet <eric.dumazet@gmail.com>




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31  7:59 [PATCH] fix readahead pipeline break caused by block plug Shaohua Li
  2012-01-31  8:36 ` Christoph Hellwig
  2012-01-31  8:48 ` Eric Dumazet
@ 2012-01-31 10:20 ` Wu Fengguang
  2012-01-31 10:34 ` Wu Fengguang
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Wu Fengguang @ 2012-01-31 10:20 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Vivek Goyal

On Tue, Jan 31, 2012 at 03:59:40PM +0800, Li, Shaohua wrote:
> Herbert Poetzl reported a performance regression since 2.6.39.

It would help to point out the exact commit that caused the regression:

 commit 55602dd66f535 ("fs: make generic file read/write functions plug")

> The test
> is a simple dd read, but with big block size. The reason is:
> 
> T1: ra (A, A+128k), (A+128k, A+256k)
> T2: lock_page for page A, submit the 256k
> T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
> because of plug and there isn't any lock_page till we hit page A+256k
> because all pages from A to A+256k is in memory
> T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
> submitted again.
> T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is
> waitting for (A+256k, A+512k) finish.
> 
> There is no request to disk in T3 and T4, so readahead pipeline breaks.

s/in/between/

> We really don't need block plug for generic_file_aio_read() for buffered
> I/O. The readahead already has plug and has fine grained control when I/O
> should be submitted. Deleting plug for buffered I/O fixes the regression.

Eric and Herbert have good performance numbers and blktrace data; it
would be good to include some of them to demonstrate this patch's
impact on both behavior and throughput :)

Thanks,
Fengguang

> One side effect is plug makes the request size 256k, the size is 128k
> without it. This is because default ra size is 128k and not a reason we
> need plug here.
> 
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> Tested-by: Herbert Poetzl <herbert@13thfloor.at>
> Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 97f49ed..b662757 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1400,15 +1400,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>  	unsigned long seg = 0;
>  	size_t count;
>  	loff_t *ppos = &iocb->ki_pos;
> -	struct blk_plug plug;
>  
>  	count = 0;
>  	retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
>  	if (retval)
>  		return retval;
>  
> -	blk_start_plug(&plug);
> -
>  	/* coalesce the iovecs and go direct-to-BIO for O_DIRECT */
>  	if (filp->f_flags & O_DIRECT) {
>  		loff_t size;
> @@ -1424,8 +1421,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>  			retval = filemap_write_and_wait_range(mapping, pos,
>  					pos + iov_length(iov, nr_segs) - 1);
>  			if (!retval) {
> +				struct blk_plug plug;
> +
> +				blk_start_plug(&plug);
>  				retval = mapping->a_ops->direct_IO(READ, iocb,
>  							iov, pos, nr_segs);
> +				blk_finish_plug(&plug);
>  			}
>  			if (retval > 0) {
>  				*ppos = pos + retval;
> @@ -1481,7 +1482,6 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>  			break;
>  	}
>  out:
> -	blk_finish_plug(&plug);
>  	return retval;
>  }
>  EXPORT_SYMBOL(generic_file_aio_read);
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31  7:59 [PATCH] fix readahead pipeline break caused by block plug Shaohua Li
                   ` (2 preceding siblings ...)
  2012-01-31 10:20 ` Wu Fengguang
@ 2012-01-31 10:34 ` Wu Fengguang
  2012-01-31 10:46   ` Christoph Hellwig
  2012-02-01  2:25   ` Shaohua Li
  2012-01-31 14:47 ` Vivek Goyal
  2012-01-31 22:03 ` Vivek Goyal
  5 siblings, 2 replies; 28+ messages in thread
From: Wu Fengguang @ 2012-01-31 10:34 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Vivek Goyal

I'd like to propose a sister patch for the write path. It may not be
as easy to measure any performance impact from it, but I'll try.

---
Subject: remove plugging at buffered write time 
Date: Tue Jan 31 18:25:48 CST 2012

Buffered write(2) is not directly tied to IO, so there is no need to
handle the plug in generic_file_aio_write().

CC: Jens Axboe <axboe@kernel.dk>
CC: Li Shaohua <shaohua.li@intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 mm/filemap.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- linux-next.orig/mm/filemap.c	2012-01-31 18:23:52.000000000 +0800
+++ linux-next/mm/filemap.c	2012-01-31 18:25:38.000000000 +0800
@@ -2267,6 +2267,7 @@ generic_file_direct_write(struct kiocb *
 	struct file	*file = iocb->ki_filp;
 	struct address_space *mapping = file->f_mapping;
 	struct inode	*inode = mapping->host;
+	struct blk_plug plug;
 	ssize_t		written;
 	size_t		write_len;
 	pgoff_t		end;
@@ -2301,7 +2302,9 @@ generic_file_direct_write(struct kiocb *
 		}
 	}
 
+	blk_start_plug(&plug);
 	written = mapping->a_ops->direct_IO(WRITE, iocb, iov, pos, *nr_segs);
+	blk_finish_plug(&plug);
 
 	/*
 	 * Finally, try again to invalidate clean pages which might have been
@@ -2610,13 +2613,11 @@ ssize_t generic_file_aio_write(struct ki
 {
 	struct file *file = iocb->ki_filp;
 	struct inode *inode = file->f_mapping->host;
-	struct blk_plug plug;
 	ssize_t ret;
 
 	BUG_ON(iocb->ki_pos != pos);
 
 	mutex_lock(&inode->i_mutex);
-	blk_start_plug(&plug);
 	ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);
 	mutex_unlock(&inode->i_mutex);
 
@@ -2627,7 +2628,6 @@ ssize_t generic_file_aio_write(struct ki
 		if (err < 0 && ret > 0)
 			ret = err;
 	}
-	blk_finish_plug(&plug);
 	return ret;
 }
 EXPORT_SYMBOL(generic_file_aio_write);

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 10:34 ` Wu Fengguang
@ 2012-01-31 10:46   ` Christoph Hellwig
  2012-01-31 10:57     ` Wu Fengguang
  2012-02-01  2:25   ` Shaohua Li
  1 sibling, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2012-01-31 10:46 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Shaohua Li, lkml, linux-mm, Andrew Morton, Jens Axboe,
	Herbert Poetzl, Eric Dumazet, Vivek Goyal

On Tue, Jan 31, 2012 at 06:34:16PM +0800, Wu Fengguang wrote:
>  	}
>  
> +	blk_start_plug(&plug);
>  	written = mapping->a_ops->direct_IO(WRITE, iocb, iov, pos, *nr_segs);
> +	blk_finish_plug(&plug);

Please move the plugging into ->direct_IO for both read and write, as
that is the boundary between generic highlevel code, and low-level block
code that should know about plugs.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 10:46   ` Christoph Hellwig
@ 2012-01-31 10:57     ` Wu Fengguang
  2012-01-31 11:34       ` Christoph Hellwig
  0 siblings, 1 reply; 28+ messages in thread
From: Wu Fengguang @ 2012-01-31 10:57 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Shaohua Li, lkml, linux-mm, Andrew Morton, Jens Axboe,
	Herbert Poetzl, Eric Dumazet, Vivek Goyal

On Tue, Jan 31, 2012 at 05:46:21AM -0500, Christoph Hellwig wrote:
> On Tue, Jan 31, 2012 at 06:34:16PM +0800, Wu Fengguang wrote:
> >  	}
> >  
> > +	blk_start_plug(&plug);
> >  	written = mapping->a_ops->direct_IO(WRITE, iocb, iov, pos, *nr_segs);
> > +	blk_finish_plug(&plug);
> 
> Please move the plugging into ->direct_IO for both read and write, as
> that is the boundary between generic highlevel code, and low-level block
> code that should know about plugs.

The problem is, there are a dozen ->direct_IO callback functions,
while there are only two ->direct_IO() callers, one for READ and
another for WRITE, which are much easier to deal with.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 10:57     ` Wu Fengguang
@ 2012-01-31 11:34       ` Christoph Hellwig
  2012-01-31 11:42         ` Wu Fengguang
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2012-01-31 11:34 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Christoph Hellwig, Shaohua Li, lkml, linux-mm, Andrew Morton,
	Jens Axboe, Herbert Poetzl, Eric Dumazet, Vivek Goyal

On Tue, Jan 31, 2012 at 06:57:54PM +0800, Wu Fengguang wrote:
> The problem is, there are a dozen of ->direct_IO callback functions.
> While there are only two ->direct_IO() callers, one for READ and
> another for WRITE, which are much easier to deal with.

So what?  Better do a bit more work now and keep the damn thing
maintainable.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 11:34       ` Christoph Hellwig
@ 2012-01-31 11:42         ` Wu Fengguang
  2012-01-31 11:57           ` Christoph Hellwig
  0 siblings, 1 reply; 28+ messages in thread
From: Wu Fengguang @ 2012-01-31 11:42 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Shaohua Li, lkml, linux-mm, Andrew Morton, Jens Axboe,
	Herbert Poetzl, Eric Dumazet, Vivek Goyal

On Tue, Jan 31, 2012 at 06:34:52AM -0500, Christoph Hellwig wrote:
> On Tue, Jan 31, 2012 at 06:57:54PM +0800, Wu Fengguang wrote:
> > The problem is, there are a dozen of ->direct_IO callback functions.
> > While there are only two ->direct_IO() callers, one for READ and
> > another for WRITE, which are much easier to deal with.
> 
> So what?  Better do a bit more work now and keep the damn thing
> maintainable.

What if we add a wrapper function for doing

        blk_start_plug(&plug);
        ->direct_IO()
        blk_finish_plug(&plug);

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 11:42         ` Wu Fengguang
@ 2012-01-31 11:57           ` Christoph Hellwig
  2012-01-31 12:20             ` Wu Fengguang
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2012-01-31 11:57 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Christoph Hellwig, Shaohua Li, lkml, linux-mm, Andrew Morton,
	Jens Axboe, Herbert Poetzl, Eric Dumazet, Vivek Goyal

On Tue, Jan 31, 2012 at 07:42:56PM +0800, Wu Fengguang wrote:
> > So what?  Better do a bit more work now and keep the damn thing
> > maintainable.
> 
> What if we add a wrapper function for doing
> 
>         blk_start_plug(&plug);
>         ->direct_IO()
>         blk_finish_plug(&plug);

No.  Just put it into __blockdev_direct_IO - a quick 2 minute audit of
the kernel source shows that is in fact the only ->direct_IO instance
which ever submits block I/O anyway.
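
Concretely, that would look something like the following (untested sketch;
locals, error handling and the existing per-segment submission code are
elided or unchanged), after which the plugs added around ->direct_IO() in
mm/filemap.c could be dropped again:

ssize_t
__blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
        struct block_device *bdev, const struct iovec *iov, loff_t offset,
        unsigned long nr_segs, get_block_t get_block, dio_iodone_t end_io,
        dio_submit_t submit_io, int flags)
{
        struct blk_plug plug;
        ssize_t retval;

        /* ... sanity checks and dio allocation, unchanged ... */

        blk_start_plug(&plug);
        /* ... existing per-segment loop that maps blocks and submits bios ... */
        blk_finish_plug(&plug);

        /* ... completion and AIO handling, unchanged ... */
        return retval;
}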


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 11:57           ` Christoph Hellwig
@ 2012-01-31 12:20             ` Wu Fengguang
  0 siblings, 0 replies; 28+ messages in thread
From: Wu Fengguang @ 2012-01-31 12:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Shaohua Li, lkml, linux-mm, Andrew Morton, Jens Axboe,
	Herbert Poetzl, Eric Dumazet, Vivek Goyal

On Tue, Jan 31, 2012 at 06:57:16AM -0500, Christoph Hellwig wrote:
> On Tue, Jan 31, 2012 at 07:42:56PM +0800, Wu Fengguang wrote:
> > > So what?  Better do a bit more work now and keep the damn thing
> > > maintainable.
> > 
> > What if we add a wrapper function for doing
> > 
> >         blk_start_plug(&plug);
> >         ->direct_IO()
> >         blk_finish_plug(&plug);
> 
> No.  Just put it into __blockdev_direct_IO - a quick 2 minute audit of
> the kernel source shows that is in fact the only ->direct_IO instance
> which ever submits block I/O anyway.

Right, that's a nice point!

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31  7:59 [PATCH] fix readahead pipeline break caused by block plug Shaohua Li
                   ` (3 preceding siblings ...)
  2012-01-31 10:34 ` Wu Fengguang
@ 2012-01-31 14:47 ` Vivek Goyal
  2012-01-31 20:23   ` Vivek Goyal
  2012-01-31 22:03 ` Vivek Goyal
  5 siblings, 1 reply; 28+ messages in thread
From: Vivek Goyal @ 2012-01-31 14:47 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Wu Fengguang

On Tue, Jan 31, 2012 at 03:59:40PM +0800, Shaohua Li wrote:
> Herbert Poetzl reported a performance regression since 2.6.39. The test
> is a simple dd read, but with big block size. The reason is:
> 
> T1: ra (A, A+128k), (A+128k, A+256k)
> T2: lock_page for page A, submit the 256k
> T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
> because of plug and there isn't any lock_page till we hit page A+256k
> because all pages from A to A+256k is in memory
> T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
> submitted again.

Why is the IO not submitted because of the plug? Doesn't the task now get
scheduled out, causing an unplug? IOW, are we now busy-waiting somewhere,
preventing the unplug?

Thanks
Vivek

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 14:47 ` Vivek Goyal
@ 2012-01-31 20:23   ` Vivek Goyal
  0 siblings, 0 replies; 28+ messages in thread
From: Vivek Goyal @ 2012-01-31 20:23 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Wu Fengguang

On Tue, Jan 31, 2012 at 09:47:34AM -0500, Vivek Goyal wrote:
> On Tue, Jan 31, 2012 at 03:59:40PM +0800, Shaohua Li wrote:
> > Herbert Poetzl reported a performance regression since 2.6.39. The test
> > is a simple dd read, but with big block size. The reason is:
> > 
> > T1: ra (A, A+128k), (A+128k, A+256k)
> > T2: lock_page for page A, submit the 256k
> > T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
> > because of plug and there isn't any lock_page till we hit page A+256k
> > because all pages from A to A+256k is in memory
> > T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
> > submitted again.
> 
> Why IO is not submitted because of plug? Doesn't task now get scheduled
> out causing an unplug? IOW, are we now busy waiting somewhere preventing
> unplug?

Ok, after putting in some trace points I think I now understand what is
happening.

We submit some readahead IO to the device request queue, but because of the
nested plug the queue never gets unplugged. When the read logic reaches a
page which is not in the page cache, it waits for the page to be read from
disk (lock_page_killable()), and only at that point do we flush the plug
list.

So effectively the readahead logic is partly broken because of nested
plugging. Removing the top-level plug (generic_file_aio_read()) for
buffered reads will allow the queue to be unplugged earlier for readahead.
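
That matches the scheduler hook: the plug is only drained when the task
actually goes to sleep, roughly like this (simplified sketch of the
kernel/sched.c code of that era):

static inline void sched_submit_work(struct task_struct *tsk)
{
        if (!tsk->state)
                return;
        /*
         * If we are going to sleep and we have plugged IO queued,
         * make sure to submit it to avoid deadlocks.
         */
        if (blk_needs_flush_plug(tsk))
                blk_schedule_flush_plug(tsk);
}

asmlinkage void __sched schedule(void)
{
        struct task_struct *tsk = current;

        sched_submit_work(tsk);
        __schedule();
}

So as long as the reader keeps hitting cached pages, nothing sleeps and the
queued readahead bios just sit on the plug list.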

Thanks
Vivek 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31  7:59 [PATCH] fix readahead pipeline break caused by block plug Shaohua Li
                   ` (4 preceding siblings ...)
  2012-01-31 14:47 ` Vivek Goyal
@ 2012-01-31 22:03 ` Vivek Goyal
  2012-01-31 22:13   ` Andrew Morton
  2012-02-01  7:02   ` Wu Fengguang
  5 siblings, 2 replies; 28+ messages in thread
From: Vivek Goyal @ 2012-01-31 22:03 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Wu Fengguang

On Tue, Jan 31, 2012 at 03:59:40PM +0800, Shaohua Li wrote:
> Herbert Poetzl reported a performance regression since 2.6.39. The test
> is a simple dd read, but with big block size. The reason is:
> 
> T1: ra (A, A+128k), (A+128k, A+256k)
> T2: lock_page for page A, submit the 256k
> T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
> because of plug and there isn't any lock_page till we hit page A+256k
> because all pages from A to A+256k is in memory
> T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
> submitted again.
> T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is
> waitting for (A+256k, A+512k) finish.
> 
> There is no request to disk in T3 and T4, so readahead pipeline breaks.
> 
> We really don't need block plug for generic_file_aio_read() for buffered
> I/O. The readahead already has plug and has fine grained control when I/O
> should be submitted. Deleting plug for buffered I/O fixes the regression.
> 
> One side effect is plug makes the request size 256k, the size is 128k
> without it. This is because default ra size is 128k and not a reason we
> need plug here.

For me, this patch helps only so much and does not get back all the
performance lost in the case of a raw disk read. It does improve the
throughput from around 85-90 MB/s to 110-120 MB/s, but running the same dd
with iflag=direct gets me more than 250 MB/s.

# echo 3 > /proc/sys/vm/drop_caches 
# dd if=/dev/sdb of=/dev/null bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.03305 s, 119 MB/s

echo 3 > /proc/sys/vm/drop_caches 
# dd if=/dev/sdb of=/dev/null bs=1M count=1K iflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.07426 s, 264 MB/s

I think it is happening because in the case of the raw read we are
submitting one page at a time to the request queue, and by the time all the
pages are submitted and one big merged request is formed, a lot of time has
been wasted.

In the case of direct IO, we are getting bigger IOs at the request queue,
so there is less CPU overhead and less idling on the queue.

I created an ext4 filesystem on the same SSD and did the buffered read,
and that seems to work just fine. Now I am getting bigger requests at
the request queue (128K, i.e. 256 sectors).

[root@chilli common]# echo 3 > /proc/sys/vm/drop_caches 
[root@chilli common]# dd if=zerofile-4G of=/dev/null bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.09186 s, 262 MB/s

Anyway, removing the top-level plug in the case of buffered reads sounds
reasonable.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 22:03 ` Vivek Goyal
@ 2012-01-31 22:13   ` Andrew Morton
  2012-01-31 22:22     ` Vivek Goyal
  2012-02-01  7:02   ` Wu Fengguang
  1 sibling, 1 reply; 28+ messages in thread
From: Andrew Morton @ 2012-01-31 22:13 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: Shaohua Li, lkml, linux-mm, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Wu Fengguang

On Tue, 31 Jan 2012 17:03:33 -0500
Vivek Goyal <vgoyal@redhat.com> wrote:

> On Tue, Jan 31, 2012 at 03:59:40PM +0800, Shaohua Li wrote:
> > Herbert Poetzl reported a performance regression since 2.6.39. The test
> > is a simple dd read, but with big block size. The reason is:
> > 
> > T1: ra (A, A+128k), (A+128k, A+256k)
> > T2: lock_page for page A, submit the 256k
> > T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
> > because of plug and there isn't any lock_page till we hit page A+256k
> > because all pages from A to A+256k is in memory
> > T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
> > submitted again.
> > T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is
> > waitting for (A+256k, A+512k) finish.
> > 
> > There is no request to disk in T3 and T4, so readahead pipeline breaks.
> > 
> > We really don't need block plug for generic_file_aio_read() for buffered
> > I/O. The readahead already has plug and has fine grained control when I/O
> > should be submitted. Deleting plug for buffered I/O fixes the regression.
> > 
> > One side effect is plug makes the request size 256k, the size is 128k
> > without it. This is because default ra size is 128k and not a reason we
> > need plug here.
> 
> For me, this patch helps only so much and does not get back all the
> performance lost in case of raw disk read. It does improve the throughput
> from around 85-90 MB/s to 110-120 MB/s but running the same dd with
> iflag=direct, gets me more than 250MB/s.
> 
> # echo 3 > /proc/sys/vm/drop_caches 
> # dd if=/dev/sdb of=/dev/null bs=1M count=1K
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 9.03305 s, 119 MB/s
> 
> echo 3 > /proc/sys/vm/drop_caches 
> # dd if=/dev/sdb of=/dev/null bs=1M count=1K iflag=direct
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 4.07426 s, 264 MB/s

Buffered I/O against the block device has a tradition of doing Weird
Things.  Do you see the same behavior when reading from a regular file?

> I think it is happening because in case of raw read we are submitting
> one page at a time to request queue

(That's not a raw read - it's using pagecache.  Please get the terms right!)

We've never really bothered making the /dev/sda[X] I/O very efficient
for large I/Os, under the (probably wrong) assumption that it isn't a
very interesting case.  Regular files will (or should) use the mpage
functions, via address_space_operations.readpages().  fs/block_dev.c
doesn't even implement it.

> and by the time all the pages
> are submitted and one big merged request is formed it wates lot of time.

But that was the case in earlier kernels too.  Why did it change?



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 22:13   ` Andrew Morton
@ 2012-01-31 22:22     ` Vivek Goyal
  2012-02-01  3:36       ` Vivek Goyal
  0 siblings, 1 reply; 28+ messages in thread
From: Vivek Goyal @ 2012-01-31 22:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Shaohua Li, lkml, linux-mm, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Wu Fengguang

On Tue, Jan 31, 2012 at 02:13:01PM -0800, Andrew Morton wrote:

[..]
> > For me, this patch helps only so much and does not get back all the
> > performance lost in case of raw disk read. It does improve the throughput
> > from around 85-90 MB/s to 110-120 MB/s but running the same dd with
> > iflag=direct, gets me more than 250MB/s.
> > 
> > # echo 3 > /proc/sys/vm/drop_caches 
> > # dd if=/dev/sdb of=/dev/null bs=1M count=1K
> > 1024+0 records in
> > 1024+0 records out
> > 1073741824 bytes (1.1 GB) copied, 9.03305 s, 119 MB/s
> > 
> > echo 3 > /proc/sys/vm/drop_caches 
> > # dd if=/dev/sdb of=/dev/null bs=1M count=1K iflag=direct
> > 1024+0 records in
> > 1024+0 records out
> > 1073741824 bytes (1.1 GB) copied, 4.07426 s, 264 MB/s
> 
> Buffered I/O against the block device has a tradition of doing Weird
> Things.  Do you see the same behavior when reading from a regular file?

No. Reading a file on an ext4 filesystem works just fine.

> 
> > I think it is happening because in case of raw read we are submitting
> > one page at a time to request queue
> 
> (That's not a raw read - it's using pagecache.  Please get the terms right!)

Ok.

> 
> We've never really bothered making the /dev/sda[X] I/O very efficient
> for large I/O's under the (probably wrong) assumption that it isn't a
> very interesting case.  Regular files will (or should) use the mpage
> functions, via address_space_operations.readpages().  fs/blockdev.c
> doesn't even implement it.
> 
> > and by the time all the pages
> > are submitted and one big merged request is formed it wates lot of time.
> 
> But that was the case in eariler kernels too.  Why did it change?

Actually, I assumed that the case of reading /dev/sda[X] worked well in
earlier kernels. Sorry about that. I will build a 2.6.38 kernel tonight
and run the test case again to make sure we had the same overhead and
relatively poor performance while reading /dev/sda[X].

I think I got confused by Eric's result in another mail, where he was
reading /dev/sda and getting around 265 MB/s with the plug removed, and I
was wondering why I am not getting the same results.

# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sdb of=/dev/null bs=2M
# count=2048
2048+0 records in
2048+0 records out
4294967296 bytes (4.3 GB) copied, 16.2309 s, 265 MB/s

Maybe it is something to do with the SSD. I will test it anyway with an
older kernel.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 10:34 ` Wu Fengguang
  2012-01-31 10:46   ` Christoph Hellwig
@ 2012-02-01  2:25   ` Shaohua Li
  1 sibling, 0 replies; 28+ messages in thread
From: Shaohua Li @ 2012-02-01  2:25 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: lkml, linux-mm, Andrew Morton, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Vivek Goyal

On Tue, 2012-01-31 at 18:34 +0800, Wu Fengguang wrote:
> I'd like to propose a sister patch on the write part. It may not be
> as easy to measure any performance impacts of it, but I'll try.
I did think about it; the write case doesn't matter because we don't do
real IO there.
I can do another patch to clean up the code (moving it into direct_IO);
does that sound ok?

> ---
> Subject: remove plugging at buffered write time 
> Date: Tue Jan 31 18:25:48 CST 2012
> 
> Buffered write(2) is not directly tied to IO, so no need to handle plug
> in generic_file_aio_write().
> 
> CC: Jens Axboe <axboe@kernel.dk>
> CC: Li Shaohua <shaohua.li@intel.com>
> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> ---
>  mm/filemap.c |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> --- linux-next.orig/mm/filemap.c	2012-01-31 18:23:52.000000000 +0800
> +++ linux-next/mm/filemap.c	2012-01-31 18:25:38.000000000 +0800
> @@ -2267,6 +2267,7 @@ generic_file_direct_write(struct kiocb *
>  	struct file	*file = iocb->ki_filp;
>  	struct address_space *mapping = file->f_mapping;
>  	struct inode	*inode = mapping->host;
> +	struct blk_plug plug;
>  	ssize_t		written;
>  	size_t		write_len;
>  	pgoff_t		end;
> @@ -2301,7 +2302,9 @@ generic_file_direct_write(struct kiocb *
>  		}
>  	}
>  
> +	blk_start_plug(&plug);
>  	written = mapping->a_ops->direct_IO(WRITE, iocb, iov, pos, *nr_segs);
> +	blk_finish_plug(&plug);
>  
>  	/*
>  	 * Finally, try again to invalidate clean pages which might have been
> @@ -2610,13 +2613,11 @@ ssize_t generic_file_aio_write(struct ki
>  {
>  	struct file *file = iocb->ki_filp;
>  	struct inode *inode = file->f_mapping->host;
> -	struct blk_plug plug;
>  	ssize_t ret;
>  
>  	BUG_ON(iocb->ki_pos != pos);
>  
>  	mutex_lock(&inode->i_mutex);
> -	blk_start_plug(&plug);
>  	ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);
>  	mutex_unlock(&inode->i_mutex);
>  
> @@ -2627,7 +2628,6 @@ ssize_t generic_file_aio_write(struct ki
>  		if (err < 0 && ret > 0)
>  			ret = err;
>  	}
> -	blk_finish_plug(&plug);
>  	return ret;
>  }
>  EXPORT_SYMBOL(generic_file_aio_write);



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 22:22     ` Vivek Goyal
@ 2012-02-01  3:36       ` Vivek Goyal
  2012-02-01  7:10         ` Wu Fengguang
  2012-02-01  9:18         ` Christoph Hellwig
  0 siblings, 2 replies; 28+ messages in thread
From: Vivek Goyal @ 2012-02-01  3:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Shaohua Li, lkml, linux-mm, Jens Axboe, Herbert Poetzl,
	Eric Dumazet, Wu Fengguang

On Tue, Jan 31, 2012 at 05:22:17PM -0500, Vivek Goyal wrote:
[..]

> > 
> > We've never really bothered making the /dev/sda[X] I/O very efficient
> > for large I/O's under the (probably wrong) assumption that it isn't a
> > very interesting case.  Regular files will (or should) use the mpage
> > functions, via address_space_operations.readpages().  fs/blockdev.c
> > doesn't even implement it.
> > 
> > > and by the time all the pages
> > > are submitted and one big merged request is formed it wates lot of time.
> > 
> > But that was the case in eariler kernels too.  Why did it change?
> 
> Actually, I assumed that the case of reading /dev/sda[X] worked well in
> earlier kernels. Sorry about that. Will build a 2.6.38 kernel tonight
> and run the test case again to make sure we had same overhead and
> relatively poor performance while reading /dev/sda[X].

Ok, I tried it with a 2.6.38 kernel and the results look more or less the
same. Throughput varied between 105 MB/s and 145 MB/s. Many times it was
close to 110 MB/s and other times it was 145 MB/s. I don't know what causes
that occasional spike.

I still see that IO is being submitted one page at a time. The only real
difference seems to be that the queue unplug happens at random times, and
many times we are submitting much smaller requests (40 sectors, 48 sectors,
etc.).

Thanks
Vivek

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-01-31 22:03 ` Vivek Goyal
  2012-01-31 22:13   ` Andrew Morton
@ 2012-02-01  7:02   ` Wu Fengguang
  1 sibling, 0 replies; 28+ messages in thread
From: Wu Fengguang @ 2012-02-01  7:02 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: Shaohua Li, lkml, linux-mm, Andrew Morton, Jens Axboe,
	Herbert Poetzl, Eric Dumazet

On Tue, Jan 31, 2012 at 05:03:33PM -0500, Vivek Goyal wrote:
> On Tue, Jan 31, 2012 at 03:59:40PM +0800, Shaohua Li wrote:
> > Herbert Poetzl reported a performance regression since 2.6.39. The test
> > is a simple dd read, but with big block size. The reason is:
> > 
> > T1: ra (A, A+128k), (A+128k, A+256k)
> > T2: lock_page for page A, submit the 256k
> > T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted
> > because of plug and there isn't any lock_page till we hit page A+256k
> > because all pages from A to A+256k is in memory
> > T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't
> > submitted again.
> > T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is
> > waitting for (A+256k, A+512k) finish.
> > 
> > There is no request to disk in T3 and T4, so readahead pipeline breaks.
> > 
> > We really don't need block plug for generic_file_aio_read() for buffered
> > I/O. The readahead already has plug and has fine grained control when I/O
> > should be submitted. Deleting plug for buffered I/O fixes the regression.
> > 
> > One side effect is plug makes the request size 256k, the size is 128k
> > without it. This is because default ra size is 128k and not a reason we
> > need plug here.
> 
> For me, this patch helps only so much and does not get back all the
> performance lost in case of raw disk read. It does improve the throughput
> from around 85-90 MB/s to 110-120 MB/s but running the same dd with
> iflag=direct, gets me more than 250MB/s.
> 
> # echo 3 > /proc/sys/vm/drop_caches 
> # dd if=/dev/sdb of=/dev/null bs=1M count=1K
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 9.03305 s, 119 MB/s
> 
> echo 3 > /proc/sys/vm/drop_caches 
> # dd if=/dev/sdb of=/dev/null bs=1M count=1K iflag=direct
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 4.07426 s, 264 MB/s
> 
> I think it is happening because in case of raw read we are submitting
> one page at a time to request queue and by the time all the pages
> are submitted and one big merged request is formed it wates lot of time.
> 
> In case of direct IO, we are getting bigger IOs at request queue so
> less cpu overhead, less idling on queue.

Note that "dd bs=1M" will result in 128KB readahead IO. The buffered
dd reads may perform much better if 1MB readahead size is used:

blockdev --setra 2048 /dev/sda

> I created ext4 filesystem on same SSD and did the buffered read and
> that seems to work just fine. Now I am getting bigger requests at
> the request queue. (128K, 256 sectors).
> 
> [root@chilli common]# echo 3 > /proc/sys/vm/drop_caches 
> [root@chilli common]# dd if=zerofile-4G of=/dev/null bs=1M count=1K
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 4.09186 s, 262 MB/s

So the raw sda reads have some performance problems. What's the exact
blktrace sequence for sda reads? And the block size?

blockdev --getbsz /dev/sda               

> Anyway, remvoing top level plug in case of buffered reads sounds
> reasonable.

Yup.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-02-01  3:36       ` Vivek Goyal
@ 2012-02-01  7:10         ` Wu Fengguang
  2012-02-01 16:01           ` Vivek Goyal
  2012-02-01  9:18         ` Christoph Hellwig
  1 sibling, 1 reply; 28+ messages in thread
From: Wu Fengguang @ 2012-02-01  7:10 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: Andrew Morton, Shaohua Li, lkml, linux-mm, Jens Axboe,
	Herbert Poetzl, Eric Dumazet

On Tue, Jan 31, 2012 at 10:36:53PM -0500, Vivek Goyal wrote:
> On Tue, Jan 31, 2012 at 05:22:17PM -0500, Vivek Goyal wrote:
> [..]
> 
> > > 
> > > We've never really bothered making the /dev/sda[X] I/O very efficient
> > > for large I/O's under the (probably wrong) assumption that it isn't a
> > > very interesting case.  Regular files will (or should) use the mpage
> > > functions, via address_space_operations.readpages().  fs/blockdev.c
> > > doesn't even implement it.
> > > 
> > > > and by the time all the pages
> > > > are submitted and one big merged request is formed it wates lot of time.
> > > 
> > > But that was the case in eariler kernels too.  Why did it change?
> > 
> > Actually, I assumed that the case of reading /dev/sda[X] worked well in
> > earlier kernels. Sorry about that. Will build a 2.6.38 kernel tonight
> > and run the test case again to make sure we had same overhead and
> > relatively poor performance while reading /dev/sda[X].
> 
> Ok, I tried it with 2.6.38 kernel and results look more or less same.
> Throughput varied between 105MB to 145MB. Many a times it was close to
> 110MB and other times it was 145MB. Don't know what causes that spike
> sometimes.

The block device really has some long-standing performance bug, which
interestingly only shows up in some test environments...

> I still see that IO is being submitted one page at a time. The only
> real difference seems to be that queue unplug happening at random times
> and many a times we are submitting much smaller requests (40 sectors, 48
> sectors etc).

Would you share the blktrace data?

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-02-01  3:36       ` Vivek Goyal
  2012-02-01  7:10         ` Wu Fengguang
@ 2012-02-01  9:18         ` Christoph Hellwig
  2012-02-01 20:10           ` Vivek Goyal
  1 sibling, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2012-02-01  9:18 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: Andrew Morton, Shaohua Li, lkml, linux-mm, Jens Axboe,
	Herbert Poetzl, Eric Dumazet, Wu Fengguang

On Tue, Jan 31, 2012 at 10:36:53PM -0500, Vivek Goyal wrote:
> I still see that IO is being submitted one page at a time. The only
> real difference seems to be that queue unplug happening at random times
> and many a times we are submitting much smaller requests (40 sectors, 48
> sectors etc).

This is expected given that the block device node uses
block_read_full_page, and not mpage_readpage(s).
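
For reference, wiring the block device node up to the mpage path would be a
small change along these lines (untested sketch against fs/block_dev.c,
reusing the existing blkdev_get_block and the linux/mpage.h helpers):

static int blkdev_readpages(struct file *file, struct address_space *mapping,
                struct list_head *pages, unsigned nr_pages)
{
        /*
         * Batch the readahead pages into large bios instead of issuing one
         * block_read_full_page() per page.
         */
        return mpage_readpages(mapping, pages, nr_pages, blkdev_get_block);
}

static const struct address_space_operations def_blk_aops = {
        .readpage       = blkdev_readpage,
        .readpages      = blkdev_readpages,     /* new */
        .writepage      = blkdev_writepage,
        /* ... rest unchanged ... */
};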

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-02-01  7:10         ` Wu Fengguang
@ 2012-02-01 16:01           ` Vivek Goyal
  0 siblings, 0 replies; 28+ messages in thread
From: Vivek Goyal @ 2012-02-01 16:01 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Andrew Morton, Shaohua Li, lkml, linux-mm, Jens Axboe,
	Herbert Poetzl, Eric Dumazet

On Wed, Feb 01, 2012 at 03:10:00PM +0800, Wu Fengguang wrote:

[..]
> > I still see that IO is being submitted one page at a time. The only
> > real difference seems to be that queue unplug happening at random times
> > and many a times we are submitting much smaller requests (40 sectors, 48
> > sectors etc).
> 
> Would you share the blktrace data?

Sure. Here are the last 1000 lines of blkparse data. This is with a 2.6.38
kernel doing the following.

# echo 3 > /proc/sys/vm/drop_caches 
# dd if=/dev/sdb of=/dev/null bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 10.0954 s, 106 MB/s

 
  8,16   3    61229     3.955827295 10003  Q   R 1074312 + 8 [dd]
  8,16   3    61230     3.955827703 10003  M   R 1074312 + 8 [dd]
  8,16   1    71037     3.955832289     0  D   R 1074304 + 16 [kworker/0:0]
  8,16   3    61231     3.955855800 10003  Q   R 1074320 + 8 [dd]
  8,16   1    71038     3.955860011     0  C   R 1074272 + 16 [0]
  8,16   3    61232     3.955864940 10003  G   R 1074320 + 8 [dd]
  8,16   3    61233     3.955866074 10003  P   N [dd]
  8,16   3    61234     3.955866515 10003  I   R 1074320 + 8 [dd]
  8,16   3    61235     3.955885658 10003  Q   R 1074328 + 8 [dd]
  8,16   3    61236     3.955886135 10003  M   R 1074328 + 8 [dd]
  8,16   1    71039     3.955888321     0  D   R 1074320 + 16 [kworker/0:0]
  8,16   3    61237     3.955911727 10003  Q   R 1074336 + 8 [dd]
  8,16   1    71040     3.955916016     0  C   R 1074288 + 16 [0]
  8,16   3    61238     3.955920681 10003  G   R 1074336 + 8 [dd]
  8,16   3    61239     3.955921848 10003  P   N [dd]
  8,16   3    61240     3.955922274 10003  I   R 1074336 + 8 [dd]
  8,16   3    61241     3.955938894 10003  Q   R 1074344 + 8 [dd]
  8,16   3    61242     3.955939395 10003  M   R 1074344 + 8 [dd]
  8,16   1    71041     3.955943253     0  D   R 1074336 + 16 [kworker/0:0]
  8,16   3    61243     3.955966616 10003  Q   R 1074352 + 8 [dd]
  8,16   1    71042     3.955970969     0  C   R 1074304 + 16 [0]
  8,16   3    61244     3.955975804 10003  G   R 1074352 + 8 [dd]
  8,16   3    61245     3.955976845 10003  P   N [dd]
  8,16   3    61246     3.955977427 10003  I   R 1074352 + 8 [dd]
  8,16   3    61247     3.955993510 10003  Q   R 1074360 + 8 [dd]
  8,16   3    61248     3.955993942 10003  M   R 1074360 + 8 [dd]
  8,16   1    71043     3.955998313     0  D   R 1074352 + 16 [kworker/0:0]
  8,16   3    61249     3.956024292 10003  Q   R 1074368 + 8 [dd]
  8,16   1    71044     3.956028659     0  C   R 1074320 + 16 [0]
  8,16   3    61250     3.956033234 10003  G   R 1074368 + 8 [dd]
  8,16   3    61251     3.956034407 10003  P   N [dd]
  8,16   3    61252     3.956034899 10003  I   R 1074368 + 8 [dd]
  8,16   3    61253     3.956051600 10003  Q   R 1074376 + 8 [dd]
  8,16   3    61254     3.956052122 10003  M   R 1074376 + 8 [dd]
  8,16   1    71045     3.956057614     0  D   R 1074368 + 16 [kworker/0:0]
  8,16   3    61255     3.956081230 10003  Q   R 1074384 + 8 [dd]
  8,16   1    71046     3.956085492     0  C   R 1074336 + 16 [0]
  8,16   3    61256     3.956090301 10003  G   R 1074384 + 8 [dd]
  8,16   3    61257     3.956091465 10003  P   N [dd]
  8,16   3    61258     3.956091882 10003  I   R 1074384 + 8 [dd]
  8,16   3    61259     3.956108808 10003  Q   R 1074392 + 8 [dd]
  8,16   3    61260     3.956109297 10003  M   R 1074392 + 8 [dd]
  8,16   1    71047     3.956112939     0  D   R 1074384 + 16 [kworker/0:0]
  8,16   3    61261     3.956138705 10003  Q   R 1074400 + 8 [dd]
  8,16   1    71048     3.956143018     0  C   R 1074352 + 16 [0]
  8,16   3    61262     3.956147710 10003  G   R 1074400 + 8 [dd]
  8,16   3    61263     3.956148886 10003  P   N [dd]
  8,16   3    61264     3.956149468 10003  I   R 1074400 + 8 [dd]
  8,16   3    61265     3.956166091 10003  Q   R 1074408 + 8 [dd]
  8,16   3    61266     3.956166553 10003  M   R 1074408 + 8 [dd]
  8,16   1    71049     3.956170398     0  D   R 1074400 + 16 [kworker/0:0]
  8,16   3    61267     3.956195193 10003  Q   R 1074416 + 8 [dd]
  8,16   1    71050     3.956199524     0  C   R 1074368 + 16 [0]
  8,16   3    61268     3.956204951 10003  G   R 1074416 + 8 [dd]
  8,16   3    61269     3.956206112 10003  P   N [dd]
  8,16   3    61270     3.956206691 10003  I   R 1074416 + 8 [dd]
  8,16   3    61271     3.956224916 10003  Q   R 1074424 + 8 [dd]
  8,16   3    61272     3.956225381 10003  M   R 1074424 + 8 [dd]
  8,16   3    61273     3.956226136 10003  U   N [dd] 3
  8,16   1    71051     3.956227498     0  D   R 1074416 + 16 [kworker/0:0]
  8,16   1    71052     3.956259216     0  C   R 1074384 + 16 [0]
  8,16   1    71053     3.956288327     0  C   R 1074400 + 16 [0]
  8,16   1    71054     3.956322157     9  C   R 1074416 + 16 [0]
  8,16   3    61274     3.956373017 10003  Q   R 1074432 + 8 [dd]
  8,16   3    61275     3.956382596 10003  G   R 1074432 + 8 [dd]
  8,16   3    61276     3.956383613 10003  P   N [dd]
  8,16   3    61277     3.956384153 10003  I   R 1074432 + 8 [dd]
  8,16   3    61278     3.956401726 10003  Q   R 1074440 + 8 [dd]
  8,16   3    61279     3.956402320 10003  M   R 1074440 + 8 [dd]
  8,16   3    61280     3.956418277 10003  Q   R 1074448 + 8 [dd]
  8,16   3    61281     3.956418664 10003  M   R 1074448 + 8 [dd]
  8,16   3    61282     3.956434548 10003  Q   R 1074456 + 8 [dd]
  8,16   3    61283     3.956434923 10003  M   R 1074456 + 8 [dd]
  8,16   3    61284     3.956450772 10003  Q   R 1074464 + 8 [dd]
  8,16   3    61285     3.956451147 10003  M   R 1074464 + 8 [dd]
  8,16   3    61286     3.956467052 10003  Q   R 1074472 + 8 [dd]
  8,16   3    61287     3.956467418 10003  M   R 1074472 + 8 [dd]
  8,16   3    61288     3.956482936 10003  Q   R 1074480 + 8 [dd]
  8,16   3    61289     3.956483314 10003  M   R 1074480 + 8 [dd]
  8,16   3    61290     3.956499163 10003  Q   R 1074488 + 8 [dd]
  8,16   3    61291     3.956499526 10003  M   R 1074488 + 8 [dd]
  8,16   3    61292     3.956515617 10003  Q   R 1074496 + 8 [dd]
  8,16   3    61293     3.956515983 10003  M   R 1074496 + 8 [dd]
  8,16   3    61294     3.956531385 10003  Q   R 1074504 + 8 [dd]
  8,16   3    61295     3.956531742 10003  M   R 1074504 + 8 [dd]
  8,16   3    61296     3.956547755 10003  Q   R 1074512 + 8 [dd]
  8,16   3    61297     3.956548142 10003  M   R 1074512 + 8 [dd]
  8,16   3    61298     3.956564182 10003  Q   R 1074520 + 8 [dd]
  8,16   3    61299     3.956564560 10003  M   R 1074520 + 8 [dd]
  8,16   3    61300     3.956580046 10003  Q   R 1074528 + 8 [dd]
  8,16   3    61301     3.956580424 10003  M   R 1074528 + 8 [dd]
  8,16   3    61302     3.956596344 10003  Q   R 1074536 + 8 [dd]
  8,16   3    61303     3.956596830 10003  M   R 1074536 + 8 [dd]
  8,16   3    61304     3.956618676 10003  Q   R 1074544 + 8 [dd]
  8,16   3    61305     3.956619159 10003  M   R 1074544 + 8 [dd]
  8,16   3    61306     3.956636342 10003  Q   R 1074552 + 8 [dd]
  8,16   3    61307     3.956636747 10003  M   R 1074552 + 8 [dd]
  8,16   3    61308     3.956652830 10003  Q   R 1074560 + 8 [dd]
  8,16   3    61309     3.956653220 10003  M   R 1074560 + 8 [dd]
  8,16   3    61310     3.956669215 10003  Q   R 1074568 + 8 [dd]
  8,16   3    61311     3.956669608 10003  M   R 1074568 + 8 [dd]
  8,16   3    61312     3.956686051 10003  Q   R 1074576 + 8 [dd]
  8,16   3    61313     3.956686447 10003  M   R 1074576 + 8 [dd]
  8,16   3    61314     3.956702016 10003  Q   R 1074584 + 8 [dd]
  8,16   3    61315     3.956702388 10003  M   R 1074584 + 8 [dd]
  8,16   3    61316     3.956718678 10003  Q   R 1074592 + 8 [dd]
  8,16   3    61317     3.956719061 10003  M   R 1074592 + 8 [dd]
  8,16   3    61318     3.956734952 10003  Q   R 1074600 + 8 [dd]
  8,16   3    61319     3.956735348 10003  M   R 1074600 + 8 [dd]
  8,16   3    61320     3.956751400 10003  Q   R 1074608 + 8 [dd]
  8,16   3    61321     3.956751796 10003  M   R 1074608 + 8 [dd]
  8,16   3    61322     3.956773951 10003  Q   R 1074616 + 8 [dd]
  8,16   3    61323     3.956774524 10003  M   R 1074616 + 8 [dd]
  8,16   3    61324     3.956790729 10003  Q   R 1074624 + 8 [dd]
  8,16   3    61325     3.956791125 10003  M   R 1074624 + 8 [dd]
  8,16   3    61326     3.956807000 10003  Q   R 1074632 + 8 [dd]
  8,16   3    61327     3.956807363 10003  M   R 1074632 + 8 [dd]
  8,16   3    61328     3.956823326 10003  Q   R 1074640 + 8 [dd]
  8,16   3    61329     3.956823698 10003  M   R 1074640 + 8 [dd]
  8,16   3    61330     3.956839609 10003  Q   R 1074648 + 8 [dd]
  8,16   3    61331     3.956839990 10003  M   R 1074648 + 8 [dd]
  8,16   3    61332     3.956855917 10003  Q   R 1074656 + 8 [dd]
  8,16   3    61333     3.956856319 10003  M   R 1074656 + 8 [dd]
  8,16   3    61334     3.956872881 10003  Q   R 1074664 + 8 [dd]
  8,16   3    61335     3.956873259 10003  M   R 1074664 + 8 [dd]
  8,16   3    61336     3.956888792 10003  Q   R 1074672 + 8 [dd]
  8,16   3    61337     3.956889155 10003  M   R 1074672 + 8 [dd]
  8,16   3    61338     3.956905229 10003  Q   R 1074680 + 8 [dd]
  8,16   3    61339     3.956905622 10003  M   R 1074680 + 8 [dd]
  8,16   3    61340     3.956906333 10003  U   N [dd] 1
  8,16   3    61341     3.956907458 10003  D   R 1074432 + 256 [dd]
  8,16   3    61342     3.957054426 10003  Q   R 1074688 + 8 [dd]
  8,16   3    61343     3.957063533 10003  G   R 1074688 + 8 [dd]
  8,16   3    61344     3.957064595 10003  P   N [dd]
  8,16   3    61345     3.957065135 10003  I   R 1074688 + 8 [dd]
  8,16   3    61346     3.957088720 10003  Q   R 1074696 + 8 [dd]
  8,16   3    61347     3.957089287 10003  M   R 1074696 + 8 [dd]
  8,16   3    61348     3.957105331 10003  Q   R 1074704 + 8 [dd]
  8,16   3    61349     3.957105721 10003  M   R 1074704 + 8 [dd]
  8,16   3    61350     3.957121482 10003  Q   R 1074712 + 8 [dd]
  8,16   3    61351     3.957121860 10003  M   R 1074712 + 8 [dd]
  8,16   3    61352     3.957137670 10003  Q   R 1074720 + 8 [dd]
  8,16   3    61353     3.957138027 10003  M   R 1074720 + 8 [dd]
  8,16   3    61354     3.957154040 10003  Q   R 1074728 + 8 [dd]
  8,16   3    61355     3.957154391 10003  M   R 1074728 + 8 [dd]
  8,16   3    61356     3.957169765 10003  Q   R 1074736 + 8 [dd]
  8,16   3    61357     3.957170131 10003  M   R 1074736 + 8 [dd]
  8,16   3    61358     3.957185362 10003  Q   R 1074744 + 8 [dd]
  8,16   3    61359     3.957185734 10003  M   R 1074744 + 8 [dd]
  8,16   3    61360     3.957201543 10003  Q   R 1074752 + 8 [dd]
  8,16   3    61361     3.957201906 10003  M   R 1074752 + 8 [dd]
  8,16   3    61362     3.957217245 10003  Q   R 1074760 + 8 [dd]
  8,16   3    61363     3.957217605 10003  M   R 1074760 + 8 [dd]
  8,16   3    61364     3.957233075 10003  Q   R 1074768 + 8 [dd]
  8,16   3    61365     3.957233435 10003  M   R 1074768 + 8 [dd]
  8,16   3    61366     3.957250151 10003  Q   R 1074776 + 8 [dd]
  8,16   3    61367     3.957250550 10003  M   R 1074776 + 8 [dd]
  8,16   3    61368     3.957266302 10003  Q   R 1074784 + 8 [dd]
  8,16   3    61369     3.957266701 10003  M   R 1074784 + 8 [dd]
  8,16   3    61370     3.957294819 10003  Q   R 1074792 + 8 [dd]
  8,16   3    61371     3.957295407 10003  M   R 1074792 + 8 [dd]
  8,16   3    61372     3.957317765 10003  Q   R 1074800 + 8 [dd]
  8,16   3    61373     3.957318296 10003  M   R 1074800 + 8 [dd]
  8,16   3    61374     3.957334388 10003  Q   R 1074808 + 8 [dd]
  8,16   3    61375     3.957334838 10003  M   R 1074808 + 8 [dd]
  8,16   3    61376     3.957350506 10003  Q   R 1074816 + 8 [dd]
  8,16   3    61377     3.957350881 10003  M   R 1074816 + 8 [dd]
  8,16   3    61378     3.957366655 10003  Q   R 1074824 + 8 [dd]
  8,16   3    61379     3.957367026 10003  M   R 1074824 + 8 [dd]
  8,16   3    61380     3.957382572 10003  Q   R 1074832 + 8 [dd]
  8,16   3    61381     3.957382950 10003  M   R 1074832 + 8 [dd]
  8,16   3    61382     3.957398531 10003  Q   R 1074840 + 8 [dd]
  8,16   3    61383     3.957398906 10003  M   R 1074840 + 8 [dd]
  8,16   1    71055     3.957409481     0  C   R 1074432 + 256 [0]
  8,16   3    61384     3.957415334 10003  Q   R 1074848 + 8 [dd]
  8,16   3    61385     3.957415841 10003  M   R 1074848 + 8 [dd]
  8,16   3    61386     3.957432655 10003  Q   R 1074856 + 8 [dd]
  8,16   3    61387     3.957433144 10003  M   R 1074856 + 8 [dd]
  8,16   3    61388     3.957450181 10003  Q   R 1074864 + 8 [dd]
  8,16   3    61389     3.957450553 10003  M   R 1074864 + 8 [dd]
  8,16   3    61390     3.957466500 10003  Q   R 1074872 + 8 [dd]
  8,16   3    61391     3.957466902 10003  M   R 1074872 + 8 [dd]
  8,16   3    61392     3.957483419 10003  Q   R 1074880 + 8 [dd]
  8,16   3    61393     3.957483815 10003  M   R 1074880 + 8 [dd]
  8,16   3    61394     3.957500783 10003  Q   R 1074888 + 8 [dd]
  8,16   3    61395     3.957501170 10003  M   R 1074888 + 8 [dd]
  8,16   3    61396     3.957517870 10003  Q   R 1074896 + 8 [dd]
  8,16   3    61397     3.957518251 10003  M   R 1074896 + 8 [dd]
  8,16   3    61398     3.957534190 10003  Q   R 1074904 + 8 [dd]
  8,16   3    61399     3.957534556 10003  M   R 1074904 + 8 [dd]
  8,16   3    61400     3.957550728 10003  Q   R 1074912 + 8 [dd]
  8,16   3    61401     3.957551094 10003  M   R 1074912 + 8 [dd]
  8,16   1    71056     3.957563616     0  D   R 1074688 + 232 [kworker/0:0]
  8,16   3    61402     3.957608098 10003  Q   R 1074920 + 8 [dd]
  8,16   3    61403     3.957618367 10003  G   R 1074920 + 8 [dd]
  8,16   3    61404     3.957619564 10003  P   N [dd]
  8,16   3    61405     3.957620086 10003  I   R 1074920 + 8 [dd]
  8,16   3    61406     3.957636099 10003  Q   R 1074928 + 8 [dd]
  8,16   3    61407     3.957636630 10003  M   R 1074928 + 8 [dd]
  8,16   3    61408     3.957665057 10003  Q   R 1074936 + 8 [dd]
  8,16   3    61409     3.957665675 10003  M   R 1074936 + 8 [dd]
  8,16   3    61410     3.957666341 10003  U   N [dd] 2
  8,16   3    61411     3.957667469 10003  D   R 1074920 + 24 [dd]
  8,16   3    61412     3.957812091 10003  Q   R 1074944 + 8 [dd]
  8,16   3    61413     3.957821259 10003  G   R 1074944 + 8 [dd]
  8,16   3    61414     3.957822270 10003  P   N [dd]
  8,16   3    61415     3.957822741 10003  I   R 1074944 + 8 [dd]
  8,16   3    61416     3.957838985 10003  Q   R 1074952 + 8 [dd]
  8,16   3    61417     3.957839435 10003  M   R 1074952 + 8 [dd]
  8,16   3    61418     3.957855185 10003  Q   R 1074960 + 8 [dd]
  8,16   3    61419     3.957855545 10003  M   R 1074960 + 8 [dd]
  8,16   3    61420     3.957871480 10003  Q   R 1074968 + 8 [dd]
  8,16   3    61421     3.957871837 10003  M   R 1074968 + 8 [dd]
  8,16   3    61422     3.957887530 10003  Q   R 1074976 + 8 [dd]
  8,16   3    61423     3.957887970 10003  M   R 1074976 + 8 [dd]
  8,16   3    61424     3.957909963 10003  Q   R 1074984 + 8 [dd]
  8,16   3    61425     3.957910518 10003  M   R 1074984 + 8 [dd]
  8,16   3    61426     3.957926171 10003  Q   R 1074992 + 8 [dd]
  8,16   3    61427     3.957926534 10003  M   R 1074992 + 8 [dd]
  8,16   3    61428     3.957943097 10003  Q   R 1075000 + 8 [dd]
  8,16   3    61429     3.957943460 10003  M   R 1075000 + 8 [dd]
  8,16   3    61430     3.957958870 10003  Q   R 1075008 + 8 [dd]
  8,16   3    61431     3.957959239 10003  M   R 1075008 + 8 [dd]
  8,16   3    61432     3.957974724 10003  Q   R 1075016 + 8 [dd]
  8,16   3    61433     3.957975096 10003  M   R 1075016 + 8 [dd]
  8,16   3    61434     3.957990396 10003  Q   R 1075024 + 8 [dd]
  8,16   3    61435     3.957990768 10003  M   R 1075024 + 8 [dd]
  8,16   3    61436     3.958006448 10003  Q   R 1075032 + 8 [dd]
  8,16   3    61437     3.958006826 10003  M   R 1075032 + 8 [dd]
  8,16   3    61438     3.958022327 10003  Q   R 1075040 + 8 [dd]
  8,16   3    61439     3.958022696 10003  M   R 1075040 + 8 [dd]
  8,16   3    61440     3.958038454 10003  Q   R 1075048 + 8 [dd]
  8,16   3    61441     3.958038814 10003  M   R 1075048 + 8 [dd]
  8,16   1    71057     3.958042243     0  C   R 1074688 + 232 [0]
  8,16   3    61442     3.958056061 10003  Q   R 1075056 + 8 [dd]
  8,16   3    61443     3.958056430 10003  M   R 1075056 + 8 [dd]
  8,16   3    61444     3.958084746 10003  Q   R 1075064 + 8 [dd]
  8,16   3    61445     3.958085318 10003  M   R 1075064 + 8 [dd]
  8,16   3    61446     3.958101767 10003  Q   R 1075072 + 8 [dd]
  8,16   3    61447     3.958102145 10003  M   R 1075072 + 8 [dd]
  8,16   3    61448     3.958118629 10003  Q   R 1075080 + 8 [dd]
  8,16   3    61449     3.958119004 10003  M   R 1075080 + 8 [dd]
  8,16   3    61450     3.958135438 10003  Q   R 1075088 + 8 [dd]
  8,16   3    61451     3.958135831 10003  M   R 1075088 + 8 [dd]
  8,16   3    61452     3.958157628 10003  Q   R 1075096 + 8 [dd]
  8,16   3    61453     3.958158006 10003  M   R 1075096 + 8 [dd]
  8,16   1    71058     3.958171826     0  D   R 1074944 + 160 [kworker/0:0]
  8,16   3    61454     3.958197161 10003  Q   R 1075104 + 8 [dd]
  8,16   1    71059     3.958200658     0  C   R 1074920 + 24 [0]
  8,16   3    61455     3.958206274 10003  G   R 1075104 + 8 [dd]
  8,16   3    61456     3.958207471 10003  P   N [dd]
  8,16   3    61457     3.958208050 10003  I   R 1075104 + 8 [dd]
  8,16   3    61458     3.958225312 10003  Q   R 1075112 + 8 [dd]
  8,16   3    61459     3.958231650 10003  M   R 1075112 + 8 [dd]
  8,16   1    71060     3.958245162     0  D   R 1075104 + 16 [kworker/0:0]
  8,16   3    61460     3.958268858 10003  Q   R 1075120 + 8 [dd]
  8,16   3    61461     3.958278005 10003  G   R 1075120 + 8 [dd]
  8,16   3    61462     3.958279130 10003  P   N [dd]
  8,16   3    61463     3.958279703 10003  I   R 1075120 + 8 [dd]
  8,16   3    61464     3.958295791 10003  Q   R 1075128 + 8 [dd]
  8,16   3    61465     3.958296217 10003  M   R 1075128 + 8 [dd]
  8,16   3    61466     3.958312035 10003  Q   R 1075136 + 8 [dd]
  8,16   3    61467     3.958312416 10003  M   R 1075136 + 8 [dd]
  8,16   3    61468     3.958328073 10003  Q   R 1075144 + 8 [dd]
  8,16   3    61469     3.958328550 10003  M   R 1075144 + 8 [dd]
  8,16   3    61470     3.958344233 10003  Q   R 1075152 + 8 [dd]
  8,16   3    61471     3.958344722 10003  M   R 1075152 + 8 [dd]
  8,16   3    61472     3.958360325 10003  Q   R 1075160 + 8 [dd]
  8,16   3    61473     3.958360691 10003  M   R 1075160 + 8 [dd]
  8,16   3    61474     3.958382956 10003  Q   R 1075168 + 8 [dd]
  8,16   3    61475     3.958383514 10003  M   R 1075168 + 8 [dd]
  8,16   3    61476     3.958399299 10003  Q   R 1075176 + 8 [dd]
  8,16   3    61477     3.958399668 10003  M   R 1075176 + 8 [dd]
  8,16   3    61478     3.958415511 10003  Q   R 1075184 + 8 [dd]
  8,16   3    61479     3.958415922 10003  M   R 1075184 + 8 [dd]
  8,16   3    61480     3.958431683 10003  Q   R 1075192 + 8 [dd]
  8,16   3    61481     3.958432049 10003  M   R 1075192 + 8 [dd]
  8,16   3    61482     3.958432622 10003  U   N [dd] 3
  8,16   3    61483     3.958433723 10003  D   R 1075120 + 80 [dd]
  8,16   1    71061     3.958504329     0  C   R 1074944 + 160 [0]
  8,16   3    61484     3.958584882 10003  Q   R 1075200 + 8 [dd]
  8,16   1    71062     3.958593116     0  C   R 1075104 + 16 [0]
  8,16   3    61485     3.958594265 10003  G   R 1075200 + 8 [dd]
  8,16   3    61486     3.958595504 10003  P   N [dd]
  8,16   3    61487     3.958596017 10003  I   R 1075200 + 8 [dd]
  8,16   1    71063     3.958626308     0  D   R 1075200 + 8 [kworker/0:0]
  8,16   3    61488     3.958650175 10003  Q   R 1075208 + 8 [dd]
  8,16   3    61489     3.958659142 10003  G   R 1075208 + 8 [dd]
  8,16   3    61490     3.958660324 10003  P   N [dd]
  8,16   3    61491     3.958660846 10003  I   R 1075208 + 8 [dd]
  8,16   3    61492     3.958676802 10003  Q   R 1075216 + 8 [dd]
  8,16   3    61493     3.958677396 10003  M   R 1075216 + 8 [dd]
  8,16   3    61494     3.958694057 10003  Q   R 1075224 + 8 [dd]
  8,16   3    61495     3.958694441 10003  M   R 1075224 + 8 [dd]
  8,16   3    61496     3.958710182 10003  Q   R 1075232 + 8 [dd]
  8,16   3    61497     3.958710551 10003  M   R 1075232 + 8 [dd]
  8,16   3    61498     3.958726009 10003  Q   R 1075240 + 8 [dd]
  8,16   3    61499     3.958726378 10003  M   R 1075240 + 8 [dd]
  8,16   3    61500     3.958741702 10003  Q   R 1075248 + 8 [dd]
  8,16   3    61501     3.958742065 10003  M   R 1075248 + 8 [dd]
  8,16   1    71064     3.958752333     9  C   R 1075120 + 80 [0]
  8,16   3    61502     3.958758396 10003  Q   R 1075256 + 8 [dd]
  8,16   3    61503     3.958758843 10003  M   R 1075256 + 8 [dd]
  8,16   3    61504     3.958775207 10003  Q   R 1075264 + 8 [dd]
  8,16   3    61505     3.958775579 10003  M   R 1075264 + 8 [dd]
  8,16   3    61506     3.958792895 10003  Q   R 1075272 + 8 [dd]
  8,16   3    61507     3.958793279 10003  M   R 1075272 + 8 [dd]
  8,16   3    61508     3.958809265 10003  Q   R 1075280 + 8 [dd]
  8,16   3    61509     3.958809667 10003  M   R 1075280 + 8 [dd]
  8,16   1    71065     3.958817353     9  D   R 1075208 + 80 [ksoftirqd/1]
  8,16   3    61510     3.958843599 10003  Q   R 1075288 + 8 [dd]
  8,16   3    61511     3.958852704 10003  G   R 1075288 + 8 [dd]
  8,16   3    61512     3.958853889 10003  P   N [dd]
  8,16   3    61513     3.958854369 10003  I   R 1075288 + 8 [dd]
  8,16   1    71066     3.958857606     9  C   R 1075200 + 8 [0]
  8,16   3    61514     3.958870874 10003  Q   R 1075296 + 8 [dd]
  8,16   3    61515     3.958871321 10003  M   R 1075296 + 8 [dd]
  8,16   1    71067     3.958883882     9  D   R 1075288 + 16 [ksoftirqd/1]
  8,16   3    61516     3.958907146 10003  Q   R 1075304 + 8 [dd]
  8,16   3    61517     3.958916112 10003  G   R 1075304 + 8 [dd]
  8,16   3    61518     3.958917234 10003  P   N [dd]
  8,16   3    61519     3.958917612 10003  I   R 1075304 + 8 [dd]
  8,16   3    61520     3.958933260 10003  Q   R 1075312 + 8 [dd]
  8,16   3    61521     3.958933803 10003  M   R 1075312 + 8 [dd]
  8,16   3    61522     3.958950137 10003  Q   R 1075320 + 8 [dd]
  8,16   3    61523     3.958950527 10003  M   R 1075320 + 8 [dd]
  8,16   3    61524     3.958965869 10003  Q   R 1075328 + 8 [dd]
  8,16   3    61525     3.958966262 10003  M   R 1075328 + 8 [dd]
  8,16   3    61526     3.958982608 10003  Q   R 1075336 + 8 [dd]
  8,16   3    61527     3.958982977 10003  M   R 1075336 + 8 [dd]
  8,16   3    61528     3.958999093 10003  Q   R 1075344 + 8 [dd]
  8,16   3    61529     3.958999489 10003  M   R 1075344 + 8 [dd]
  8,16   1    71068     3.959020698     9  C   R 1075208 + 80 [0]
  8,16   3    61530     3.959021787 10003  Q   R 1075352 + 8 [dd]
  8,16   3    61531     3.959022351 10003  M   R 1075352 + 8 [dd]
  8,16   3    61532     3.959039957 10003  Q   R 1075360 + 8 [dd]
  8,16   3    61533     3.959040365 10003  M   R 1075360 + 8 [dd]
  8,16   3    61534     3.959057422 10003  Q   R 1075368 + 8 [dd]
  8,16   3    61535     3.959057812 10003  M   R 1075368 + 8 [dd]
  8,16   3    61536     3.959074435 10003  Q   R 1075376 + 8 [dd]
  8,16   3    61537     3.959074795 10003  M   R 1075376 + 8 [dd]
  8,16   1    71069     3.959081413     9  D   R 1075304 + 80 [ksoftirqd/1]
  8,16   3    61538     3.959105907 10003  Q   R 1075384 + 8 [dd]
  8,16   3    61539     3.959114924 10003  G   R 1075384 + 8 [dd]
  8,16   3    61540     3.959116034 10003  P   N [dd]
  8,16   3    61541     3.959116457 10003  I   R 1075384 + 8 [dd]
  8,16   1    71070     3.959120735     9  C   R 1075288 + 16 [0]
  8,16   3    61542     3.959138171 10003  Q   R 1075392 + 8 [dd]
  8,16   3    61543     3.959138651 10003  M   R 1075392 + 8 [dd]
  8,16   1    71071     3.959151304     9  D   R 1075384 + 16 [ksoftirqd/1]
  8,16   3    61544     3.959174550 10003  Q   R 1075400 + 8 [dd]
  8,16   3    61545     3.959183502 10003  G   R 1075400 + 8 [dd]
  8,16   3    61546     3.959184657 10003  P   N [dd]
  8,16   3    61547     3.959185113 10003  I   R 1075400 + 8 [dd]
  8,16   3    61548     3.959200916 10003  Q   R 1075408 + 8 [dd]
  8,16   3    61549     3.959201501 10003  M   R 1075408 + 8 [dd]
  8,16   3    61550     3.959217287 10003  Q   R 1075416 + 8 [dd]
  8,16   3    61551     3.959217701 10003  M   R 1075416 + 8 [dd]
  8,16   3    61552     3.959233732 10003  Q   R 1075424 + 8 [dd]
  8,16   3    61553     3.959234134 10003  M   R 1075424 + 8 [dd]
  8,16   3    61554     3.959249593 10003  Q   R 1075432 + 8 [dd]
  8,16   3    61555     3.959249947 10003  M   R 1075432 + 8 [dd]
  8,16   3    61556     3.959265567 10003  Q   R 1075440 + 8 [dd]
  8,16   3    61557     3.959266053 10003  M   R 1075440 + 8 [dd]
  8,16   1    71072     3.959271573     0  C   R 1075304 + 80 [0]
  8,16   3    61558     3.959283176 10003  Q   R 1075448 + 8 [dd]
  8,16   3    61559     3.959283542 10003  M   R 1075448 + 8 [dd]
  8,16   3    61560     3.959284211 10003  U   N [dd] 3
  8,16   3    61561     3.959285324 10003  D   R 1075400 + 56 [dd]
  8,16   1    71073     3.959334457     0  C   R 1075384 + 16 [0]
  8,16   3    61562     3.959428005 10003  Q   R 1075456 + 8 [dd]
  8,16   3    61563     3.959437422 10003  G   R 1075456 + 8 [dd]
  8,16   3    61564     3.959438628 10003  P   N [dd]
  8,16   3    61565     3.959439222 10003  I   R 1075456 + 8 [dd]
  8,16   3    61566     3.959454899 10003  Q   R 1075464 + 8 [dd]
  8,16   3    61567     3.959455331 10003  M   R 1075464 + 8 [dd]
  8,16   1    71074     3.959458571     0  C   R 1075400 + 56 [0]
  8,16   3    61568     3.959472050 10003  Q   R 1075472 + 8 [dd]
  8,16   3    61569     3.959472407 10003  M   R 1075472 + 8 [dd]
  8,16   3    61570     3.959491747 10003  Q   R 1075480 + 8 [dd]
  8,16   3    61571     3.959492095 10003  M   R 1075480 + 8 [dd]
  8,16   1    71075     3.959505145     0  D   R 1075456 + 32 [kworker/0:0]
  8,16   3    61572     3.959529492 10003  Q   R 1075488 + 8 [dd]
  8,16   3    61573     3.959538599 10003  G   R 1075488 + 8 [dd]
  8,16   3    61574     3.959539793 10003  P   N [dd]
  8,16   3    61575     3.959540366 10003  I   R 1075488 + 8 [dd]
  8,16   3    61576     3.959556521 10003  Q   R 1075496 + 8 [dd]
  8,16   3    61577     3.959557031 10003  M   R 1075496 + 8 [dd]
  8,16   3    61578     3.959572384 10003  Q   R 1075504 + 8 [dd]
  8,16   3    61579     3.959572765 10003  M   R 1075504 + 8 [dd]
  8,16   3    61580     3.959588005 10003  Q   R 1075512 + 8 [dd]
  8,16   3    61581     3.959588365 10003  M   R 1075512 + 8 [dd]
  8,16   1    71076     3.959609526     0  C   R 1075456 + 32 [0]
  8,16   3    61582     3.959612382 10003  Q   R 1075520 + 8 [dd]
  8,16   3    61583     3.959612874 10003  M   R 1075520 + 8 [dd]
  8,16   3    61584     3.959633979 10003  Q   R 1075528 + 8 [dd]
  8,16   3    61585     3.959634377 10003  M   R 1075528 + 8 [dd]
  8,16   1    71077     3.959647013     0  D   R 1075488 + 48 [kworker/0:0]
  8,16   3    61586     3.959671300 10003  Q   R 1075536 + 8 [dd]
  8,16   3    61587     3.959680429 10003  G   R 1075536 + 8 [dd]
  8,16   3    61588     3.959681677 10003  P   N [dd]
  8,16   3    61589     3.959682190 10003  I   R 1075536 + 8 [dd]
  8,16   3    61590     3.959698194 10003  Q   R 1075544 + 8 [dd]
  8,16   3    61591     3.959698638 10003  M   R 1075544 + 8 [dd]
  8,16   3    61592     3.959714169 10003  Q   R 1075552 + 8 [dd]
  8,16   3    61593     3.959714538 10003  M   R 1075552 + 8 [dd]
  8,16   3    61594     3.959731664 10003  Q   R 1075560 + 8 [dd]
  8,16   3    61595     3.959732069 10003  M   R 1075560 + 8 [dd]
  8,16   3    61596     3.959747359 10003  Q   R 1075568 + 8 [dd]
  8,16   3    61597     3.959747728 10003  M   R 1075568 + 8 [dd]
  8,16   3    61598     3.959763127 10003  Q   R 1075576 + 8 [dd]
  8,16   3    61599     3.959763520 10003  M   R 1075576 + 8 [dd]
  8,16   3    61600     3.959778966 10003  Q   R 1075584 + 8 [dd]
  8,16   1    71078     3.959779290     0  C   R 1075488 + 48 [0]
  8,16   3    61601     3.959779455 10003  M   R 1075584 + 8 [dd]
  8,16   3    61602     3.959796513 10003  Q   R 1075592 + 8 [dd]
  8,16   3    61603     3.959796921 10003  M   R 1075592 + 8 [dd]
  8,16   3    61604     3.959813948 10003  Q   R 1075600 + 8 [dd]
  8,16   3    61605     3.959814428 10003  M   R 1075600 + 8 [dd]
  8,16   1    71079     3.959823254     0  D   R 1075536 + 72 [kworker/0:0]
  8,16   3    61606     3.959848024 10003  Q   R 1075608 + 8 [dd]
  8,16   3    61607     3.959857075 10003  G   R 1075608 + 8 [dd]
  8,16   3    61608     3.959858230 10003  P   N [dd]
  8,16   3    61609     3.959858770 10003  I   R 1075608 + 8 [dd]
  8,16   3    61610     3.959874462 10003  Q   R 1075616 + 8 [dd]
  8,16   3    61611     3.959874909 10003  M   R 1075616 + 8 [dd]
  8,16   3    61612     3.959890391 10003  Q   R 1075624 + 8 [dd]
  8,16   3    61613     3.959890778 10003  M   R 1075624 + 8 [dd]
  8,16   3    61614     3.959906378 10003  Q   R 1075632 + 8 [dd]
  8,16   3    61615     3.959906774 10003  M   R 1075632 + 8 [dd]
  8,16   3    61616     3.959922061 10003  Q   R 1075640 + 8 [dd]
  8,16   3    61617     3.959922448 10003  M   R 1075640 + 8 [dd]
  8,16   3    61618     3.959938495 10003  Q   R 1075648 + 8 [dd]
  8,16   3    61619     3.959938909 10003  M   R 1075648 + 8 [dd]
  8,16   3    61620     3.959954472 10003  Q   R 1075656 + 8 [dd]
  8,16   3    61621     3.959954859 10003  M   R 1075656 + 8 [dd]
  8,16   3    61622     3.959970372 10003  Q   R 1075664 + 8 [dd]
  8,16   3    61623     3.959970771 10003  M   R 1075664 + 8 [dd]
  8,16   3    61624     3.959987411 10003  Q   R 1075672 + 8 [dd]
  8,16   3    61625     3.959987861 10003  M   R 1075672 + 8 [dd]
  8,16   1    71080     3.959999729     0  C   R 1075536 + 72 [0]
  8,16   3    61626     3.960004864 10003  Q   R 1075680 + 8 [dd]
  8,16   3    61627     3.960005254 10003  M   R 1075680 + 8 [dd]
  8,16   3    61628     3.960022414 10003  Q   R 1075688 + 8 [dd]
  8,16   3    61629     3.960022804 10003  M   R 1075688 + 8 [dd]
  8,16   3    61630     3.960044304 10003  Q   R 1075696 + 8 [dd]
  8,16   3    61631     3.960044697 10003  M   R 1075696 + 8 [dd]
  8,16   1    71081     3.960057383     0  D   R 1075608 + 96 [kworker/0:0]
  8,16   3    61632     3.960081875 10003  Q   R 1075704 + 8 [dd]
  8,16   3    61633     3.960090928 10003  G   R 1075704 + 8 [dd]
  8,16   3    61634     3.960091999 10003  P   N [dd]
  8,16   3    61635     3.960092473 10003  I   R 1075704 + 8 [dd]
  8,16   3    61636     3.960093019 10003  U   N [dd] 2
  8,16   3    61637     3.960094261 10003  D   R 1075704 + 8 [dd]
  8,16   3    61638     3.960227696 10003  Q   R 1075712 + 8 [dd]
  8,16   3    61639     3.960236645 10003  G   R 1075712 + 8 [dd]
  8,16   3    61640     3.960237602 10003  P   N [dd]
  8,16   3    61641     3.960238226 10003  I   R 1075712 + 8 [dd]
  8,16   3    61642     3.960266728 10003  Q   R 1075720 + 8 [dd]
  8,16   3    61643     3.960267391 10003  M   R 1075720 + 8 [dd]
  8,16   1    71082     3.960275476     0  C   R 1075608 + 96 [0]
  8,16   3    61644     3.960284793 10003  Q   R 1075728 + 8 [dd]
  8,16   3    61645     3.960285222 10003  M   R 1075728 + 8 [dd]
  8,16   3    61646     3.960302349 10003  Q   R 1075736 + 8 [dd]
  8,16   3    61647     3.960302757 10003  M   R 1075736 + 8 [dd]
  8,16   3    61648     3.960319169 10003  Q   R 1075744 + 8 [dd]
  8,16   3    61649     3.960319556 10003  M   R 1075744 + 8 [dd]
  8,16   3    61650     3.960336002 10003  Q   R 1075752 + 8 [dd]
  8,16   3    61651     3.960336377 10003  M   R 1075752 + 8 [dd]
  8,16   1    71083     3.960343882     0  D   R 1075712 + 48 [kworker/0:0]
  8,16   3    61652     3.960369747 10003  Q   R 1075760 + 8 [dd]
  8,16   1    71084     3.960372969     0  C   R 1075704 + 8 [0]
  8,16   3    61653     3.960378843 10003  G   R 1075760 + 8 [dd]
  8,16   3    61654     3.960384918 10003  P   N [dd]
  8,16   3    61655     3.960385602 10003  I   R 1075760 + 8 [dd]
  8,16   1    71085     3.960398510     0  D   R 1075760 + 8 [kworker/0:0]
  8,16   3    61656     3.960421883 10003  Q   R 1075768 + 8 [dd]
  8,16   3    61657     3.960430969 10003  G   R 1075768 + 8 [dd]
  8,16   3    61658     3.960432121 10003  P   N [dd]
  8,16   3    61659     3.960432706 10003  I   R 1075768 + 8 [dd]
  8,16   3    61660     3.960448243 10003  Q   R 1075776 + 8 [dd]
  8,16   3    61661     3.960448816 10003  M   R 1075776 + 8 [dd]
  8,16   3    61662     3.960465393 10003  Q   R 1075784 + 8 [dd]
  8,16   3    61663     3.960465762 10003  M   R 1075784 + 8 [dd]
  8,16   1    71086     3.960477717     0  C   R 1075712 + 48 [0]
  8,16   3    61664     3.960482399 10003  Q   R 1075792 + 8 [dd]
  8,16   3    61665     3.960482786 10003  M   R 1075792 + 8 [dd]
  8,16   3    61666     3.960500039 10003  Q   R 1075800 + 8 [dd]
  8,16   3    61667     3.960500453 10003  M   R 1075800 + 8 [dd]
  8,16   3    61668     3.960516283 10003  Q   R 1075808 + 8 [dd]
  8,16   3    61669     3.960516733 10003  M   R 1075808 + 8 [dd]
  8,16   1    71087     3.960522754     0  D   R 1075768 + 48 [kworker/0:0]
  8,16   3    61670     3.960546267 10003  Q   R 1075816 + 8 [dd]
  8,16   1    71088     3.960549525     0  C   R 1075760 + 8 [0]
  8,16   3    61671     3.960555519 10003  G   R 1075816 + 8 [dd]
  8,16   3    61672     3.960561462 10003  P   N [dd]
  8,16   3    61673     3.960562143 10003  I   R 1075816 + 8 [dd]
  8,16   1    71089     3.960574499     0  D   R 1075816 + 8 [kworker/0:0]
  8,16   3    61674     3.960597655 10003  Q   R 1075824 + 8 [dd]
  8,16   3    61675     3.960613075 10003  G   R 1075824 + 8 [dd]
  8,16   3    61676     3.960614239 10003  P   N [dd]
  8,16   3    61677     3.960614782 10003  I   R 1075824 + 8 [dd]
  8,16   3    61678     3.960631849 10003  Q   R 1075832 + 8 [dd]
  8,16   3    61679     3.960632344 10003  M   R 1075832 + 8 [dd]
  8,16   3    61680     3.960648741 10003  Q   R 1075840 + 8 [dd]
  8,16   3    61681     3.960649131 10003  M   R 1075840 + 8 [dd]
  8,16   1    71090     3.960655050     0  C   R 1075768 + 48 [0]
  8,16   3    61682     3.960666407 10003  Q   R 1075848 + 8 [dd]
  8,16   3    61683     3.960666833 10003  M   R 1075848 + 8 [dd]
  8,16   3    61684     3.960687557 10003  Q   R 1075856 + 8 [dd]
  8,16   3    61685     3.960687983 10003  M   R 1075856 + 8 [dd]
  8,16   1    71091     3.960700408     0  D   R 1075824 + 40 [kworker/0:0]
  8,16   3    61686     3.960723687 10003  Q   R 1075864 + 8 [dd]
  8,16   1    71092     3.960727011     0  C   R 1075816 + 8 [0]
  8,16   3    61687     3.960733311 10003  G   R 1075864 + 8 [dd]
  8,16   3    61688     3.960738624 10003  P   N [dd]
  8,16   3    61689     3.960739242 10003  I   R 1075864 + 8 [dd]
  8,16   1    71093     3.960751607     0  D   R 1075864 + 8 [kworker/0:0]
  8,16   3    61690     3.960774479 10003  Q   R 1075872 + 8 [dd]
  8,16   3    61691     3.960784012 10003  G   R 1075872 + 8 [dd]
  8,16   3    61692     3.960785089 10003  P   N [dd]
  8,16   3    61693     3.960785596 10003  I   R 1075872 + 8 [dd]
  8,16   3    61694     3.960802003 10003  Q   R 1075880 + 8 [dd]
  8,16   3    61695     3.960802474 10003  M   R 1075880 + 8 [dd]
  8,16   3    61696     3.960818580 10003  Q   R 1075888 + 8 [dd]
  8,16   1    71094     3.960818970     0  C   R 1075824 + 40 [0]
  8,16   3    61697     3.960819117 10003  M   R 1075888 + 8 [dd]
  8,16   3    61698     3.960836885 10003  Q   R 1075896 + 8 [dd]
  8,16   3    61699     3.960837293 10003  M   R 1075896 + 8 [dd]
  8,16   1    71095     3.960859373     0  D   R 1075872 + 32 [kworker/0:0]
  8,16   3    61700     3.960882532 10003  Q   R 1075904 + 8 [dd]
  8,16   1    71096     3.960885883     0  C   R 1075864 + 8 [0]
  8,16   3    61701     3.960891972 10003  G   R 1075904 + 8 [dd]
  8,16   3    61702     3.960897417 10003  P   N [dd]
  8,16   3    61703     3.960898194 10003  I   R 1075904 + 8 [dd]
  8,16   1    71097     3.960910374     0  D   R 1075904 + 8 [kworker/0:0]
  8,16   3    61704     3.960934874 10003  Q   R 1075912 + 8 [dd]
  8,16   3    61705     3.960944491 10003  G   R 1075912 + 8 [dd]
  8,16   3    61706     3.960945778 10003  P   N [dd]
  8,16   3    61707     3.960946498 10003  I   R 1075912 + 8 [dd]
  8,16   3    61708     3.960962494 10003  Q   R 1075920 + 8 [dd]
  8,16   1    71098     3.960962878     0  C   R 1075872 + 32 [0]
  8,16   3    61709     3.960963139 10003  M   R 1075920 + 8 [dd]
  8,16   3    61710     3.960984975 10003  Q   R 1075928 + 8 [dd]
  8,16   3    61711     3.960985482 10003  M   R 1075928 + 8 [dd]
  8,16   1    71099     3.960998046     0  D   R 1075912 + 24 [kworker/0:0]
  8,16   3    61712     3.961023695 10003  Q   R 1075936 + 8 [dd]
  8,16   1    71100     3.961028150     0  C   R 1075904 + 8 [0]
  8,16   3    61713     3.961032883 10003  G   R 1075936 + 8 [dd]
  8,16   3    61714     3.961034122 10003  P   N [dd]
  8,16   3    61715     3.961034710 10003  I   R 1075936 + 8 [dd]
  8,16   3    61716     3.961052257 10003  Q   R 1075944 + 8 [dd]
  8,16   3    61717     3.961052803 10003  M   R 1075944 + 8 [dd]
  8,16   1    71101     3.961055083     0  D   R 1075936 + 16 [kworker/0:0]
  8,16   3    61718     3.961079550 10003  Q   R 1075952 + 8 [dd]
  8,16   3    61719     3.961088528 10003  G   R 1075952 + 8 [dd]
  8,16   3    61720     3.961089791 10003  P   N [dd]
  8,16   3    61721     3.961090274 10003  I   R 1075952 + 8 [dd]
  8,16   1    71102     3.961091075     0  C   R 1075912 + 24 [0]
  8,16   3    61722     3.961111061 10003  Q   R 1075960 + 8 [dd]
  8,16   3    61723     3.961111529 10003  M   R 1075960 + 8 [dd]
  8,16   3    61724     3.961112435 10003  U   N [dd] 2
  8,16   3    61725     3.961113466 10003  D   R 1075952 + 16 [dd]
  8,16   1    71103     3.961145814     0  C   R 1075936 + 16 [0]
  8,16   1    71104     3.961189592     0  C   R 1075952 + 16 [0]
  8,16   3    61726     3.961254485 10003  Q   R 1075968 + 8 [dd]
  8,16   3    61727     3.961264376 10003  G   R 1075968 + 8 [dd]
  8,16   3    61728     3.961265594 10003  P   N [dd]
  8,16   3    61729     3.961266155 10003  I   R 1075968 + 8 [dd]
  8,16   3    61730     3.961282426 10003  Q   R 1075976 + 8 [dd]
  8,16   3    61731     3.961282897 10003  M   R 1075976 + 8 [dd]
  8,16   3    61732     3.961298368 10003  Q   R 1075984 + 8 [dd]
  8,16   3    61733     3.961298758 10003  M   R 1075984 + 8 [dd]
  8,16   3    61734     3.961314870 10003  Q   R 1075992 + 8 [dd]
  8,16   3    61735     3.961315293 10003  M   R 1075992 + 8 [dd]
  8,16   3    61736     3.961331235 10003  Q   R 1076000 + 8 [dd]
  8,16   3    61737     3.961331640 10003  M   R 1076000 + 8 [dd]
  8,16   3    61738     3.961348361 10003  Q   R 1076008 + 8 [dd]
  8,16   3    61739     3.961348748 10003  M   R 1076008 + 8 [dd]
  8,16   3    61740     3.961364441 10003  Q   R 1076016 + 8 [dd]
  8,16   3    61741     3.961364810 10003  M   R 1076016 + 8 [dd]
  8,16   3    61742     3.961380232 10003  Q   R 1076024 + 8 [dd]
  8,16   3    61743     3.961380595 10003  M   R 1076024 + 8 [dd]
  8,16   3    61744     3.961396020 10003  Q   R 1076032 + 8 [dd]
  8,16   3    61745     3.961396407 10003  M   R 1076032 + 8 [dd]
  8,16   3    61746     3.961412364 10003  Q   R 1076040 + 8 [dd]
  8,16   3    61747     3.961412745 10003  M   R 1076040 + 8 [dd]
  8,16   3    61748     3.961428767 10003  Q   R 1076048 + 8 [dd]
  8,16   3    61749     3.961429169 10003  M   R 1076048 + 8 [dd]
  8,16   3    61750     3.961444712 10003  Q   R 1076056 + 8 [dd]
  8,16   3    61751     3.961445081 10003  M   R 1076056 + 8 [dd]
  8,16   3    61752     3.961473247 10003  Q   R 1076064 + 8 [dd]
  8,16   3    61753     3.961473838 10003  M   R 1076064 + 8 [dd]
  8,16   3    61754     3.961489911 10003  Q   R 1076072 + 8 [dd]
  8,16   3    61755     3.961490439 10003  M   R 1076072 + 8 [dd]
  8,16   3    61756     3.961505898 10003  Q   R 1076080 + 8 [dd]
  8,16   3    61757     3.961506411 10003  M   R 1076080 + 8 [dd]
  8,16   3    61758     3.961528562 10003  Q   R 1076088 + 8 [dd]
  8,16   3    61759     3.961529138 10003  M   R 1076088 + 8 [dd]
  8,16   3    61760     3.961544896 10003  Q   R 1076096 + 8 [dd]
  8,16   3    61761     3.961545265 10003  M   R 1076096 + 8 [dd]
  8,16   3    61762     3.961560988 10003  Q   R 1076104 + 8 [dd]
  8,16   3    61763     3.961561384 10003  M   R 1076104 + 8 [dd]
  8,16   3    61764     3.961577124 10003  Q   R 1076112 + 8 [dd]
  8,16   3    61765     3.961577490 10003  M   R 1076112 + 8 [dd]
  8,16   3    61766     3.961594118 10003  Q   R 1076120 + 8 [dd]
  8,16   3    61767     3.961594496 10003  M   R 1076120 + 8 [dd]
  8,16   3    61768     3.961617035 10003  Q   R 1076128 + 8 [dd]
  8,16   3    61769     3.961617560 10003  M   R 1076128 + 8 [dd]
  8,16   3    61770     3.961633127 10003  Q   R 1076136 + 8 [dd]
  8,16   3    61771     3.961633529 10003  M   R 1076136 + 8 [dd]
  8,16   3    61772     3.961649077 10003  Q   R 1076144 + 8 [dd]
  8,16   3    61773     3.961649470 10003  M   R 1076144 + 8 [dd]
  8,16   3    61774     3.961664952 10003  Q   R 1076152 + 8 [dd]
  8,16   3    61775     3.961665315 10003  M   R 1076152 + 8 [dd]
  8,16   3    61776     3.961680813 10003  Q   R 1076160 + 8 [dd]
  8,16   3    61777     3.961681200 10003  M   R 1076160 + 8 [dd]
  8,16   3    61778     3.961696619 10003  Q   R 1076168 + 8 [dd]
  8,16   3    61779     3.961696985 10003  M   R 1076168 + 8 [dd]
  8,16   3    61780     3.961712216 10003  Q   R 1076176 + 8 [dd]
  8,16   3    61781     3.961712582 10003  M   R 1076176 + 8 [dd]
  8,16   3    61782     3.961728031 10003  Q   R 1076184 + 8 [dd]
  8,16   3    61783     3.961728487 10003  M   R 1076184 + 8 [dd]
  8,16   3    61784     3.961743952 10003  Q   R 1076192 + 8 [dd]
  8,16   3    61785     3.961744372 10003  M   R 1076192 + 8 [dd]
  8,16   3    61786     3.961759827 10003  Q   R 1076200 + 8 [dd]
  8,16   3    61787     3.961760220 10003  M   R 1076200 + 8 [dd]
  8,16   3    61788     3.961775486 10003  Q   R 1076208 + 8 [dd]
  8,16   3    61789     3.961775873 10003  M   R 1076208 + 8 [dd]
  8,16   3    61790     3.961803946 10003  Q   R 1076216 + 8 [dd]
  8,16   3    61791     3.961804570 10003  M   R 1076216 + 8 [dd]
  8,16   3    61792     3.961805230 10003  U   N [dd] 1
  8,16   3    61793     3.961806406 10003  D   R 1075968 + 256 [dd]
  8,16   3    61794     3.961950914 10003  Q   R 1076224 + 8 [dd]
  8,16   3    61795     3.961960013 10003  G   R 1076224 + 8 [dd]
  8,16   3    61796     3.961961102 10003  P   N [dd]
  8,16   3    61797     3.961961627 10003  I   R 1076224 + 8 [dd]
  8,16   3    61798     3.961985011 10003  Q   R 1076232 + 8 [dd]
  8,16   3    61799     3.961985590 10003  M   R 1076232 + 8 [dd]
  8,16   3    61800     3.962001501 10003  Q   R 1076240 + 8 [dd]
  8,16   3    61801     3.962001879 10003  M   R 1076240 + 8 [dd]
  8,16   3    61802     3.962017764 10003  Q   R 1076248 + 8 [dd]
  8,16   3    61803     3.962018130 10003  M   R 1076248 + 8 [dd]
  8,16   3    61804     3.962033915 10003  Q   R 1076256 + 8 [dd]
  8,16   3    61805     3.962034290 10003  M   R 1076256 + 8 [dd]
  8,16   3    61806     3.962049986 10003  Q   R 1076264 + 8 [dd]
  8,16   3    61807     3.962050370 10003  M   R 1076264 + 8 [dd]
  8,16   3    61808     3.962072302 10003  Q   R 1076272 + 8 [dd]
  8,16   3    61809     3.962072863 10003  M   R 1076272 + 8 [dd]
  8,16   3    61810     3.962088501 10003  Q   R 1076280 + 8 [dd]
  8,16   3    61811     3.962089023 10003  M   R 1076280 + 8 [dd]
  8,16   3    61812     3.962104443 10003  Q   R 1076288 + 8 [dd]
  8,16   3    61813     3.962104800 10003  M   R 1076288 + 8 [dd]
  8,16   3    61814     3.962120105 10003  Q   R 1076296 + 8 [dd]
  8,16   3    61815     3.962120474 10003  M   R 1076296 + 8 [dd]
  8,16   3    61816     3.962135924 10003  Q   R 1076304 + 8 [dd]
  8,16   3    61817     3.962136287 10003  M   R 1076304 + 8 [dd]
  8,16   3    61818     3.962151616 10003  Q   R 1076312 + 8 [dd]
  8,16   3    61819     3.962151985 10003  M   R 1076312 + 8 [dd]
  8,16   3    61820     3.962167540 10003  Q   R 1076320 + 8 [dd]
  8,16   3    61821     3.962167909 10003  M   R 1076320 + 8 [dd]
  8,16   3    61822     3.962183343 10003  Q   R 1076328 + 8 [dd]
  8,16   3    61823     3.962183751 10003  M   R 1076328 + 8 [dd]
  8,16   3    61824     3.962199383 10003  Q   R 1076336 + 8 [dd]
  8,16   3    61825     3.962199845 10003  M   R 1076336 + 8 [dd]
  8,16   3    61826     3.962229163 10003  Q   R 1076344 + 8 [dd]
  8,16   3    61827     3.962229757 10003  M   R 1076344 + 8 [dd]
  8,16   3    61828     3.962245600 10003  Q   R 1076352 + 8 [dd]
  8,16   3    61829     3.962246077 10003  M   R 1076352 + 8 [dd]
  8,16   3    61830     3.962261946 10003  Q   R 1076360 + 8 [dd]
  8,16   3    61831     3.962262324 10003  M   R 1076360 + 8 [dd]
  8,16   3    61832     3.962277924 10003  Q   R 1076368 + 8 [dd]
  8,16   3    61833     3.962278305 10003  M   R 1076368 + 8 [dd]
  8,16   3    61834     3.962293790 10003  Q   R 1076376 + 8 [dd]
  8,16   3    61835     3.962294162 10003  M   R 1076376 + 8 [dd]
  8,16   1    71105     3.962307266     0  C   R 1075968 + 256 [0]
  8,16   3    61836     3.962309822 10003  Q   R 1076384 + 8 [dd]
  8,16   3    61837     3.962310188 10003  M   R 1076384 + 8 [dd]
  8,16   3    61838     3.962326702 10003  Q   R 1076392 + 8 [dd]
  8,16   3    61839     3.962327071 10003  M   R 1076392 + 8 [dd]
  8,16   3    61840     3.962342832 10003  Q   R 1076400 + 8 [dd]
  8,16   3    61841     3.962343189 10003  M   R 1076400 + 8 [dd]
  8,16   3    61842     3.962359344 10003  Q   R 1076408 + 8 [dd]
  8,16   3    61843     3.962359704 10003  M   R 1076408 + 8 [dd]
  8,16   3    61844     3.962376656 10003  Q   R 1076416 + 8 [dd]
  8,16   3    61845     3.962377031 10003  M   R 1076416 + 8 [dd]
  8,16   3    61846     3.962393093 10003  Q   R 1076424 + 8 [dd]
  8,16   3    61847     3.962393459 10003  M   R 1076424 + 8 [dd]
  8,16   3    61848     3.962409892 10003  Q   R 1076432 + 8 [dd]
  8,16   3    61849     3.962410363 10003  M   R 1076432 + 8 [dd]
  8,16   3    61850     3.962426088 10003  Q   R 1076440 + 8 [dd]
  8,16   3    61851     3.962426448 10003  M   R 1076440 + 8 [dd]
  8,16   3    61852     3.962443428 10003  Q   R 1076448 + 8 [dd]
  8,16   3    61853     3.962443800 10003  M   R 1076448 + 8 [dd]
  8,16   1    71106     3.962457491     0  D   R 1076224 + 232 [kworker/0:0]
  8,16   3    61854     3.962496835 10003  Q   R 1076456 + 8 [dd]
  8,16   3    61855     3.962506030 10003  G   R 1076456 + 8 [dd]
  8,16   3    61856     3.962507134 10003  P   N [dd]
  8,16   3    61857     3.962507616 10003  I   R 1076456 + 8 [dd]
  8,16   3    61858     3.962523519 10003  Q   R 1076464 + 8 [dd]
  8,16   3    61859     3.962524062 10003  M   R 1076464 + 8 [dd]
  8,16   3    61860     3.962539793 10003  Q   R 1076472 + 8 [dd]
  8,16   3    61861     3.962540177 10003  M   R 1076472 + 8 [dd]
  8,16   3    61862     3.962540849 10003  U   N [dd] 2
  8,16   3    61863     3.962541920 10003  D   R 1076456 + 24 [dd]
  8,16   3    61864     3.962705919 10003  Q   R 1076480 + 8 [dd]
  8,16   3    61865     3.962714862 10003  G   R 1076480 + 8 [dd]
  8,16   3    61866     3.962715951 10003  P   N [dd]
  8,16   3    61867     3.962716557 10003  I   R 1076480 + 8 [dd]
  8,16   3    61868     3.962732999 10003  Q   R 1076488 + 8 [dd]
  8,16   3    61869     3.962733497 10003  M   R 1076488 + 8 [dd]
  8,16   3    61870     3.962749165 10003  Q   R 1076496 + 8 [dd]
  8,16   3    61871     3.962749606 10003  M   R 1076496 + 8 [dd]
  8,16   3    61872     3.962765311 10003  Q   R 1076504 + 8 [dd]
  8,16   3    61873     3.962765668 10003  M   R 1076504 + 8 [dd]
  8,16   3    61874     3.962781267 10003  Q   R 1076512 + 8 [dd]
  8,16   3    61875     3.962781618 10003  M   R 1076512 + 8 [dd]
  8,16   3    61876     3.962797338 10003  Q   R 1076520 + 8 [dd]
  8,16   3    61877     3.962797704 10003  M   R 1076520 + 8 [dd]
  8,16   3    61878     3.962813105 10003  Q   R 1076528 + 8 [dd]
  8,16   3    61879     3.962813456 10003  M   R 1076528 + 8 [dd]
  8,16   3    61880     3.962828852 10003  Q   R 1076536 + 8 [dd]
  8,16   3    61881     3.962829206 10003  M   R 1076536 + 8 [dd]
  8,16   3    61882     3.962844448 10003  Q   R 1076544 + 8 [dd]
  8,16   3    61883     3.962844814 10003  M   R 1076544 + 8 [dd]
  8,16   3    61884     3.962860272 10003  Q   R 1076552 + 8 [dd]
  8,16   3    61885     3.962860626 10003  M   R 1076552 + 8 [dd]
  8,16   3    61886     3.962875914 10003  Q   R 1076560 + 8 [dd]
  8,16   3    61887     3.962876274 10003  M   R 1076560 + 8 [dd]
  8,16   3    61888     3.962892551 10003  Q   R 1076568 + 8 [dd]
  8,16   3    61889     3.962892926 10003  M   R 1076568 + 8 [dd]
  8,16   3    61890     3.962908244 10003  Q   R 1076576 + 8 [dd]
  8,16   3    61891     3.962908616 10003  M   R 1076576 + 8 [dd]
  8,16   3    61892     3.962924212 10003  Q   R 1076584 + 8 [dd]
  8,16   3    61893     3.962924578 10003  M   R 1076584 + 8 [dd]
  8,16   1    71107     3.962931385     0  C   R 1076224 + 232 [0]
  8,16   3    61894     3.962941147 10003  Q   R 1076592 + 8 [dd]
  8,16   3    61895     3.962941513 10003  M   R 1076592 + 8 [dd]
  8,16   3    61896     3.962957970 10003  Q   R 1076600 + 8 [dd]
  8,16   3    61897     3.962958354 10003  M   R 1076600 + 8 [dd]
  8,16   3    61898     3.962975189 10003  Q   R 1076608 + 8 [dd]
  8,16   3    61899     3.962975660 10003  M   R 1076608 + 8 [dd]
  8,16   3    61900     3.963005020 10003  Q   R 1076616 + 8 [dd]
  8,16   3    61901     3.963005614 10003  M   R 1076616 + 8 [dd]
  8,16   3    61902     3.963022708 10003  Q   R 1076624 + 8 [dd]
  8,16   3    61903     3.963023146 10003  M   R 1076624 + 8 [dd]
  8,16   3    61904     3.963044361 10003  Q   R 1076632 + 8 [dd]
  8,16   3    61905     3.963044736 10003  M   R 1076632 + 8 [dd]
  8,16   1    71108     3.963058451     0  D   R 1076480 + 160 [kworker/0:0]
  8,16   3    61906     3.963084293 10003  Q   R 1076640 + 8 [dd]
  8,16   1    71109     3.963087622     0  C   R 1076456 + 24 [0]
  8,16   3    61907     3.963093640 10003  G   R 1076640 + 8 [dd]
  8,16   3    61908     3.963094888 10003  P   N [dd]
  8,16   3    61909     3.963095470 10003  I   R 1076640 + 8 [dd]
  8,16   3    61910     3.963111634 10003  Q   R 1076648 + 8 [dd]
  8,16   3    61911     3.963112060 10003  M   R 1076648 + 8 [dd]
  8,16   3    61912     3.963128220 10003  Q   R 1076656 + 8 [dd]
  8,16   3    61913     3.963128667 10003  M   R 1076656 + 8 [dd]
  8,16   1    71110     3.963132804     0  D   R 1076640 + 24 [kworker/0:0]
  8,16   3    61914     3.963156872 10003  Q   R 1076664 + 8 [dd]
  8,16   3    61915     3.963165929 10003  G   R 1076664 + 8 [dd]
  8,16   3    61916     3.963167159 10003  P   N [dd]
  8,16   3    61917     3.963167633 10003  I   R 1076664 + 8 [dd]
  8,16   3    61918     3.963183400 10003  Q   R 1076672 + 8 [dd]
  8,16   3    61919     3.963183901 10003  M   R 1076672 + 8 [dd]
  8,16   3    61920     3.963200862 10003  Q   R 1076680 + 8 [dd]
  8,16   3    61921     3.963201231 10003  M   R 1076680 + 8 [dd]
  8,16   3    61922     3.963216849 10003  Q   R 1076688 + 8 [dd]
  8,16   3    61923     3.963217233 10003  M   R 1076688 + 8 [dd]
  8,16   3    61924     3.963232889 10003  Q   R 1076696 + 8 [dd]
  8,16   3    61925     3.963233258 10003  M   R 1076696 + 8 [dd]
  8,16   3    61926     3.963248687 10003  Q   R 1076704 + 8 [dd]
  8,16   3    61927     3.963249134 10003  M   R 1076704 + 8 [dd]
  8,16   3    61928     3.963264649 10003  Q   R 1076712 + 8 [dd]
  8,16   3    61929     3.963265006 10003  M   R 1076712 + 8 [dd]
  8,16   3    61930     3.963280495 10003  Q   R 1076720 + 8 [dd]
  8,16   3    61931     3.963280879 10003  M   R 1076720 + 8 [dd]
  8,16   3    61932     3.963296244 10003  Q   R 1076728 + 8 [dd]
  8,16   3    61933     3.963296616 10003  M   R 1076728 + 8 [dd]
  8,16   3    61934     3.963297285 10003  U   N [dd] 3
  8,16   3    61935     3.963298347 10003  D   R 1076664 + 72 [dd]
  8,16   1    71111     3.963390225     0  C   R 1076480 + 160 [0]
  8,16   3    61936     3.963434731 10003  Q   R 1076736 + 8 [dd]
  8,16   3    61937     3.963443827 10003  G   R 1076736 + 8 [dd]
  8,16   3    61938     3.963445093 10003  P   N [dd]
  8,16   3    61939     3.963445624 10003  I   R 1076736 + 8 [dd]
  8,16   1    71112     3.963477720     0  D   R 1076736 + 8 [kworker/0:0]
  8,16   3    61940     3.963501059 10003  Q   R 1076744 + 8 [dd]
  8,16   1    71113     3.963504212     0  C   R 1076640 + 24 [0]
  8,16   3    61941     3.963511372 10003  G   R 1076744 + 8 [dd]
  8,16   3    61942     3.963512500 10003  P   N [dd]
  8,16   3    61943     3.963513142 10003  I   R 1076744 + 8 [dd]
  8,16   3    61944     3.963529300 10003  Q   R 1076752 + 8 [dd]
  8,16   3    61945     3.963529768 10003  M   R 1076752 + 8 [dd]
  8,16   1    71114     3.963536556     0  D   R 1076744 + 16 [kworker/0:0]
  8,16   3    61946     3.963559893 10003  Q   R 1076760 + 8 [dd]
  8,16   3    61947     3.963568922 10003  G   R 1076760 + 8 [dd]
  8,16   3    61948     3.963570023 10003  P   N [dd]
  8,16   3    61949     3.963570518 10003  I   R 1076760 + 8 [dd]
  8,16   3    61950     3.963586067 10003  Q   R 1076768 + 8 [dd]
  8,16   3    61951     3.963586592 10003  M   R 1076768 + 8 [dd]
  8,16   3    61952     3.963651750 10003  Q   R 1076776 + 8 [dd]
  8,16   3    61953     3.963652311 10003  M   R 1076776 + 8 [dd]
  8,16   1    71115     3.963653739     0  C   R 1076664 + 72 [0]
  8,16   3    61954     3.963670631 10003  Q   R 1076784 + 8 [dd]
  8,16   3    61955     3.963671006 10003  M   R 1076784 + 8 [dd]
  8,16   3    61956     3.963689498 10003  Q   R 1076792 + 8 [dd]
  8,16   3    61957     3.963689897 10003  M   R 1076792 + 8 [dd]
  8,16   3    61958     3.963706474 10003  Q   R 1076800 + 8 [dd]
  8,16   3    61959     3.963706834 10003  M   R 1076800 + 8 [dd]
  8,16   1    71116     3.963713800     0  D   R 1076760 + 48 [kworker/0:0]
  8,16   3    61960     3.963739902 10003  Q   R 1076808 + 8 [dd]
  8,16   1    71117     3.963744588     0  C   R 1076736 + 8 [0]
  8,16   3    61961     3.963749246 10003  G   R 1076808 + 8 [dd]
  8,16   3    61962     3.963750437 10003  P   N [dd]
  8,16   3    61963     3.963751049 10003  I   R 1076808 + 8 [dd]
  8,16   3    61964     3.963767303 10003  Q   R 1076816 + 8 [dd]
  8,16   3    61965     3.963767729 10003  M   R 1076816 + 8 [dd]
  8,16   1    71118     3.963771647     0  D   R 1076808 + 16 [kworker/0:0]
  8,16   3    61966     3.963796708 10003  Q   R 1076824 + 8 [dd]
  8,16   1    71119     3.963799219     0  C   R 1076744 + 16 [0]
  8,16   3    61967     3.963805917 10003  G   R 1076824 + 8 [dd]
  8,16   3    61968     3.963807078 10003  P   N [dd]
  8,16   3    61969     3.963807684 10003  I   R 1076824 + 8 [dd]
  8,16   3    61970     3.963824088 10003  Q   R 1076832 + 8 [dd]
  8,16   3    61971     3.963824595 10003  M   R 1076832 + 8 [dd]
  8,16   1    71120     3.963828153     0  D   R 1076824 + 16 [kworker/0:0]
  8,16   3    61972     3.963853169 10003  Q   R 1076840 + 8 [dd]
  8,16   1    71121     3.963857297     0  C   R 1076760 + 48 [0]
  8,16   3    61973     3.963862243 10003  G   R 1076840 + 8 [dd]
  8,16   3    61974     3.963863452 10003  P   N [dd]
  8,16   3    61975     3.963863914 10003  I   R 1076840 + 8 [dd]
  8,16   3    61976     3.963880243 10003  Q   R 1076848 + 8 [dd]
  8,16   3    61977     3.963880687 10003  M   R 1076848 + 8 [dd]
  8,16   3    61978     3.963896922 10003  Q   R 1076856 + 8 [dd]
  8,16   3    61979     3.963897324 10003  M   R 1076856 + 8 [dd]
  8,16   1    71122     3.963900924     0  D   R 1076840 + 24 [kworker/0:0]
  8,16   3    61980     3.963927272 10003  Q   R 1076864 + 8 [dd]
  8,16   1    71123     3.963931478     0  C   R 1076808 + 16 [0]
  8,16   3    61981     3.963936821 10003  G   R 1076864 + 8 [dd]
  8,16   3    61982     3.963937949 10003  P   N [dd]
  8,16   3    61983     3.963938549 10003  I   R 1076864 + 8 [dd]
  8,16   3    61984     3.963955120 10003  Q   R 1076872 + 8 [dd]
  8,16   3    61985     3.963955606 10003  M   R 1076872 + 8 [dd]
  8,16   1    71124     3.963960226     0  D   R 1076864 + 16 [kworker/0:0]
  8,16   3    61986     3.963985101 10003  Q   R 1076880 + 8 [dd]
  8,16   1    71125     3.963988317     0  C   R 1076824 + 16 [0]
  8,16   3    61987     3.963994233 10003  G   R 1076880 + 8 [dd]
  8,16   3    61988     3.963995358 10003  P   N [dd]
  8,16   3    61989     3.963996105 10003  I   R 1076880 + 8 [dd]
  8,16   3    61990     3.964012559 10003  Q   R 1076888 + 8 [dd]
  8,16   3    61991     3.964013075 10003  M   R 1076888 + 8 [dd]
  8,16   1    71126     3.964017533     0  D   R 1076880 + 16 [kworker/0:0]
  8,16   3    61992     3.964042510 10003  Q   R 1076896 + 8 [dd]
  8,16   1    71127     3.964045948     0  C   R 1076840 + 24 [0]
  8,16   3    61993     3.964051489 10003  G   R 1076896 + 8 [dd]
  8,16   3    61994     3.964052806 10003  P   N [dd]
  8,16   3    61995     3.964053256 10003  I   R 1076896 + 8 [dd]
  8,16   3    61996     3.964072269 10003  Q   R 1076904 + 8 [dd]
  8,16   3    61997     3.964072719 10003  M   R 1076904 + 8 [dd]
  8,16   1    71128     3.964079379     0  D   R 1076896 + 16 [kworker/0:0]
  8,16   3    61998     3.964105172 10003  Q   R 1076912 + 8 [dd]
  8,16   1    71129     3.964109558     0  C   R 1076864 + 16 [0]
  8,16   3    61999     3.964114123 10003  G   R 1076912 + 8 [dd]
  8,16   3    62000     3.964115329 10003  P   N [dd]
  8,16   3    62001     3.964115914 10003  I   R 1076912 + 8 [dd]
  8,16   3    62002     3.964131979 10003  Q   R 1076920 + 8 [dd]
  8,16   3    62003     3.964132492 10003  M   R 1076920 + 8 [dd]
  8,16   1    71130     3.964136764     0  D   R 1076912 + 16 [kworker/0:0]
  8,16   3    62004     3.964175513 10003  Q   R 1076928 + 8 [dd]
  8,16   1    71131     3.964180046     0  C   R 1076880 + 16 [0]
  8,16   3    62005     3.964184729 10003  G   R 1076928 + 8 [dd]
  8,16   3    62006     3.964185896 10003  P   N [dd]
  8,16   3    62007     3.964186364 10003  I   R 1076928 + 8 [dd]
  8,16   3    62008     3.964202785 10003  Q   R 1076936 + 8 [dd]
  8,16   3    62009     3.964203235 10003  M   R 1076936 + 8 [dd]
  8,16   1    71132     3.964208746     0  D   R 1076928 + 16 [kworker/0:0]
  8,16   3    62010     3.964233873 10003  Q   R 1076944 + 8 [dd]
  8,16   1    71133     3.964238088     0  C   R 1076896 + 16 [0]
  8,16   3    62011     3.964242837 10003  G   R 1076944 + 8 [dd]
  8,16   3    62012     3.964244025 10003  P   N [dd]
  8,16   3    62013     3.964244451 10003  I   R 1076944 + 8 [dd]
  8,16   3    62014     3.964260785 10003  Q   R 1076952 + 8 [dd]
  8,16   3    62015     3.964261244 10003  M   R 1076952 + 8 [dd]
  8,16   1    71134     3.964265939     0  D   R 1076944 + 16 [kworker/0:0]
  8,16   3    62016     3.964291150 10003  Q   R 1076960 + 8 [dd]
  8,16   1    71135     3.964295125     0  C   R 1076912 + 16 [0]
  8,16   3    62017     3.964300201 10003  G   R 1076960 + 8 [dd]
  8,16   3    62018     3.964301290 10003  P   N [dd]
  8,16   3    62019     3.964301692 10003  I   R 1076960 + 8 [dd]
  8,16   3    62020     3.964318020 10003  Q   R 1076968 + 8 [dd]
  8,16   3    62021     3.964318449 10003  M   R 1076968 + 8 [dd]
  8,16   1    71136     3.964322631     0  D   R 1076960 + 16 [kworker/0:0]
  8,16   3    62022     3.964347845 10003  Q   R 1076976 + 8 [dd]
  8,16   1    71137     3.964352171     0  C   R 1076928 + 16 [0]
  8,16   3    62023     3.964356818 10003  G   R 1076976 + 8 [dd]
  8,16   3    62024     3.964358012 10003  P   N [dd]
  8,16   3    62025     3.964358447 10003  I   R 1076976 + 8 [dd]
  8,16   1    71138     3.964378849     0  D   R 1076976 + 8 [kworker/0:0]
  8,16   3    62026     3.964403805 10003  Q   R 1076984 + 8 [dd]
  8,16   1    71139     3.964407849     0  C   R 1076944 + 16 [0]
  8,16   3    62027     3.964413063 10003  G   R 1076984 + 8 [dd]
  8,16   3    62028     3.964414287 10003  P   N [dd]
  8,16   3    62029     3.964414758 10003  I   R 1076984 + 8 [dd]
  8,16   3    62030     3.964415418 10003  U   N [dd] 4
  8,16   3    62031     3.964416645 10003  D   R 1076984 + 8 [dd]
  8,16   1    71140     3.964458919     0  C   R 1076960 + 16 [0]
  8,16   1    71141     3.964487937     0  C   R 1076976 + 8 [0]
  8,16   1    71142     3.964515887     9  C   R 1076984 + 8 [0]
  8,16   3    62032     3.964558084 10003  Q   R 1076992 + 8 [dd]
  8,16   3    62033     3.964567284 10003  G   R 1076992 + 8 [dd]
  8,16   3    62034     3.964568361 10003  P   N [dd]
  8,16   3    62035     3.964568787 10003  I   R 1076992 + 8 [dd]
  8,16   3    62036     3.964584285 10003  Q   R 1077000 + 8 [dd]
  8,16   3    62037     3.964584759 10003  M   R 1077000 + 8 [dd]
  8,16   3    62038     3.964613555 10003  Q   R 1077008 + 8 [dd]
  8,16   3    62039     3.964614218 10003  M   R 1077008 + 8 [dd]
  8,16   3    62040     3.964631132 10003  Q   R 1077016 + 8 [dd]
  8,16   3    62041     3.964631690 10003  M   R 1077016 + 8 [dd]
  8,16   3    62042     3.964647205 10003  Q   R 1077024 + 8 [dd]
  8,16   3    62043     3.964647679 10003  M   R 1077024 + 8 [dd]
  8,16   3    62044     3.964663068 10003  Q   R 1077032 + 8 [dd]
  8,16   3    62045     3.964663446 10003  M   R 1077032 + 8 [dd]
  8,16   3    62046     3.964678566 10003  Q   R 1077040 + 8 [dd]
  8,16   3    62047     3.964678965 10003  M   R 1077040 + 8 [dd]
  8,16   3    62048     3.964694432 10003  Q   R 1077048 + 8 [dd]
  8,16   3    62049     3.964694792 10003  M   R 1077048 + 8 [dd]
  8,16   3    62050     3.964710047 10003  Q   R 1077056 + 8 [dd]
  8,16   3    62051     3.964710419 10003  M   R 1077056 + 8 [dd]
  8,16   3    62052     3.964725805 10003  Q   R 1077064 + 8 [dd]
  8,16   3    62053     3.964726195 10003  M   R 1077064 + 8 [dd]
  8,16   3    62054     3.964741528 10003  Q   R 1077072 + 8 [dd]
  8,16   3    62055     3.964741891 10003  M   R 1077072 + 8 [dd]
  8,16   3    62056     3.964757388 10003  Q   R 1077080 + 8 [dd]
  8,16   3    62057     3.964757745 10003  M   R 1077080 + 8 [dd]
  8,16   3    62058     3.964773183 10003  Q   R 1077088 + 8 [dd]
  8,16   3    62059     3.964773537 10003  M   R 1077088 + 8 [dd]
  8,16   3    62060     3.964788881 10003  Q   R 1077096 + 8 [dd]
  8,16   3    62061     3.964789271 10003  M   R 1077096 + 8 [dd]
  8,16   3    62062     3.964804621 10003  Q   R 1077104 + 8 [dd]
  8,16   3    62063     3.964804996 10003  M   R 1077104 + 8 [dd]
  8,16   3    62064     3.964821058 10003  Q   R 1077112 + 8 [dd]
  8,16   3    62065     3.964821559 10003  M   R 1077112 + 8 [dd]
  8,16   3    62066     3.964837593 10003  Q   R 1077120 + 8 [dd]
  8,16   3    62067     3.964838091 10003  M   R 1077120 + 8 [dd]
  8,16   3    62068     3.964854357 10003  Q   R 1077128 + 8 [dd]
  8,16   3    62069     3.964854711 10003  M   R 1077128 + 8 [dd]
  8,16   3    62070     3.964870229 10003  Q   R 1077136 + 8 [dd]
  8,16   3    62071     3.964870601 10003  M   R 1077136 + 8 [dd]
  8,16   3    62072     3.964886672 10003  Q   R 1077144 + 8 [dd]
  8,16   3    62073     3.964887062 10003  M   R 1077144 + 8 [dd]
  8,16   3    62074     3.964902367 10003  Q   R 1077152 + 8 [dd]
  8,16   3    62075     3.964902709 10003  M   R 1077152 + 8 [dd]
  8,16   3    62076     3.964918239 10003  Q   R 1077160 + 8 [dd]
  8,16   3    62077     3.964918593 10003  M   R 1077160 + 8 [dd]
  8,16   3    62078     3.964933959 10003  Q   R 1077168 + 8 [dd]
  8,16   3    62079     3.964934325 10003  M   R 1077168 + 8 [dd]
  8,16   3    62080     3.964950380 10003  Q   R 1077176 + 8 [dd]
  8,16   3    62081     3.964950758 10003  M   R 1077176 + 8 [dd]
  8,16   3    62082     3.964966094 10003  Q   R 1077184 + 8 [dd]
  8,16   3    62083     3.964966469 10003  M   R 1077184 + 8 [dd]
  8,16   3    62084     3.964988635 10003  Q   R 1077192 + 8 [dd]
  8,16   3    62085     3.964989256 10003  M   R 1077192 + 8 [dd]
  8,16   3    62086     3.965004834 10003  Q   R 1077200 + 8 [dd]
  8,16   3    62087     3.965005200 10003  M   R 1077200 + 8 [dd]
  8,16   3    62088     3.965021187 10003  Q   R 1077208 + 8 [dd]
  8,16   3    62089     3.965021565 10003  M   R 1077208 + 8 [dd]
  8,16   3    62090     3.965036933 10003  Q   R 1077216 + 8 [dd]
  8,16   3    62091     3.965037326 10003  M   R 1077216 + 8 [dd]
  8,16   3    62092     3.965052839 10003  Q   R 1077224 + 8 [dd]
  8,16   3    62093     3.965053214 10003  M   R 1077224 + 8 [dd]
  8,16   3    62094     3.965068618 10003  Q   R 1077232 + 8 [dd]
  8,16   3    62095     3.965069008 10003  M   R 1077232 + 8 [dd]
  8,16   3    62096     3.965085540 10003  Q   R 1077240 + 8 [dd]
  8,16   3    62097     3.965085912 10003  M   R 1077240 + 8 [dd]
  8,16   3    62098     3.965086503 10003  U   N [dd] 1
  8,16   3    62099     3.965087583 10003  D   R 1076992 + 256 [dd]
  8,16   3    62100     3.965233390 10003  Q   R 1077248 + 8 [dd]
  8,16   3    62101     3.965242420 10003  G   R 1077248 + 8 [dd]
  8,16   3    62102     3.965243458 10003  P   N [dd]
  8,16   3    62103     3.965243995 10003  I   R 1077248 + 8 [dd]
  8,16   3    62104     3.965266203 10003  Q   R 1077256 + 8 [dd]
  8,16   3    62105     3.965266782 10003  M   R 1077256 + 8 [dd]
  8,16   3    62106     3.965282348 10003  Q   R 1077264 + 8 [dd]
  8,16   3    62107     3.965282714 10003  M   R 1077264 + 8 [dd]
  8,16   3    62108     3.965297927 10003  Q   R 1077272 + 8 [dd]
  8,16   3    62109     3.965298323 10003  M   R 1077272 + 8 [dd]
  8,16   3    62110     3.965314132 10003  Q   R 1077280 + 8 [dd]
  8,16   3    62111     3.965314498 10003  M   R 1077280 + 8 [dd]
  8,16   3    62112     3.965329960 10003  Q   R 1077288 + 8 [dd]
  8,16   3    62113     3.965330329 10003  M   R 1077288 + 8 [dd]
  8,16   3    62114     3.965345628 10003  Q   R 1077296 + 8 [dd]
  8,16   3    62115     3.965345985 10003  M   R 1077296 + 8 [dd]
  8,16   3    62116     3.965361201 10003  Q   R 1077304 + 8 [dd]
  8,16   3    62117     3.965361558 10003  M   R 1077304 + 8 [dd]
  8,16   3    62118     3.965376863 10003  Q   R 1077312 + 8 [dd]
  8,16   3    62119     3.965377220 10003  M   R 1077312 + 8 [dd]
  8,16   3    62120     3.965392850 10003  Q   R 1077320 + 8 [dd]
  8,16   3    62121     3.965393219 10003  M   R 1077320 + 8 [dd]
  8,16   3    62122     3.965409508 10003  Q   R 1077328 + 8 [dd]
  8,16   3    62123     3.965409874 10003  M   R 1077328 + 8 [dd]
  8,16   3    62124     3.965426751 10003  Q   R 1077336 + 8 [dd]
  8,16   3    62125     3.965427117 10003  M   R 1077336 + 8 [dd]
CPU0 (sdb):
 Reads Queued:      44,197,  176,788KiB	 Writes Queued:           0,        0KiB
 Read Dispatches:    4,636,  167,460KiB	 Write Dispatches:        0,        0KiB
 Reads Requeued:         0		 Writes Requeued:         0
 Reads Completed:    4,595,  166,832KiB	 Writes Completed:        0,        0KiB
 Read Merges:       39,187,  156,748KiB	 Write Merges:            0,        0KiB
 Read depth:             4        	 Write depth:             0
 IO unplugs:         1,382        	 Timer unplugs:           0
CPU1 (sdb):
 Reads Queued:      26,136,  104,544KiB	 Writes Queued:           0,        0KiB
 Read Dispatches:    5,852,  175,160KiB	 Write Dispatches:        0,        0KiB
 Reads Requeued:         0		 Writes Requeued:         0
 Reads Completed:    6,664,  209,736KiB	 Writes Completed:        0,        0KiB
 Read Merges:       23,367,   93,468KiB	 Write Merges:            0,        0KiB
 Read depth:             4        	 Write depth:             0
 IO unplugs:           816        	 Timer unplugs:           0
CPU2 (sdb):
 Reads Queued:      37,989,  151,956KiB	 Writes Queued:           0,        0KiB
 Read Dispatches:    4,562,  161,256KiB	 Write Dispatches:        0,        0KiB
 Reads Requeued:         0		 Writes Requeued:         0
 Reads Completed:    4,604,  161,928KiB	 Writes Completed:        0,        0KiB
 Read Merges:       33,800,  135,200KiB	 Write Merges:            0,        0KiB
 Read depth:             4        	 Write depth:             0
 IO unplugs:         1,187        	 Timer unplugs:           0
CPU3 (sdb):
 Reads Queued:      26,346,  105,384KiB	 Writes Queued:           0,        0KiB
 Read Dispatches:      814,   34,748KiB	 Write Dispatches:        0,        0KiB
 Reads Requeued:         0		 Writes Requeued:         0
 Reads Completed:        0,        0KiB	 Writes Completed:        0,        0KiB
 Read Merges:       22,449,   89,796KiB	 Write Merges:            0,        0KiB
 Read depth:             4        	 Write depth:             0
 IO unplugs:           824        	 Timer unplugs:           1

Total (sdb):
 Reads Queued:     134,668,  538,672KiB	 Writes Queued:           0,        0KiB
 Read Dispatches:   15,864,  538,624KiB	 Write Dispatches:        0,        0KiB
 Reads Requeued:         0		 Writes Requeued:         0
 Reads Completed:   15,863,  538,496KiB	 Writes Completed:        0,        0KiB
 Read Merges:      118,803,  475,212KiB	 Write Merges:            0,        0KiB
 IO unplugs:         4,209        	 Timer unplugs:           1

Throughput (R/W): 135,812KiB/s / 0KiB/s
Events (sdb): 337,003 entries
Skips: 0 forward (0 -   0.0%)
Input file sdb.blktrace.0 added
Input file sdb.blktrace.1 added
Input file sdb.blktrace.2 added
Input file sdb.blktrace.3 added

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-02-01  9:18         ` Christoph Hellwig
@ 2012-02-01 20:10           ` Vivek Goyal
  2012-02-01 20:13             ` Jeff Moyer
  2012-02-01 20:22             ` Andrew Morton
  0 siblings, 2 replies; 28+ messages in thread
From: Vivek Goyal @ 2012-02-01 20:10 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Shaohua Li, lkml, linux-mm, Jens Axboe,
	Herbert Poetzl, Eric Dumazet, Wu Fengguang

On Wed, Feb 01, 2012 at 04:18:07AM -0500, Christoph Hellwig wrote:
> On Tue, Jan 31, 2012 at 10:36:53PM -0500, Vivek Goyal wrote:
> > I still see that IO is being submitted one page at a time. The only
> > real difference seems to be that the queue unplug is happening at random
> > times, and many times we are submitting much smaller requests (40 sectors,
> > 48 sectors, etc.).
> 
> This is expected given that the block device node uses
> block_read_full_page, and not mpage_readpage(s).

What is the difference between block_read_full_page() and
mpage_readpage()? IOW, why does the block device not use the
mpage_readpage(s) interface?

Is enabling mpage_readpages() on block devices as simple as the following
patch, or is there more to it? (I suspect it has to be more than this. If it
were this simple, it would have been done by now.)

This patch compiles and seems to work. (The system does not crash and dd
seems to be working. I can't verify the contents of the file, though.)

Applying the following patch improved the speed from 110MB/s to more than
230MB/s.

# dd if=/dev/sdb of=/dev/null bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.6269 s, 232 MB/s
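
For reference, the reason a ->readpages() hook changes the request size is
that readahead hands the whole batch of pages to ->readpages() in one call,
while without it every page goes through ->readpage() separately. A rough
sketch of how mm/readahead.c dispatches a batch (simplified from memory;
illustrative only, not the exact code of any particular kernel version):

	static int read_pages(struct address_space *mapping, struct file *filp,
			      struct list_head *pages, unsigned nr_pages)
	{
		struct blk_plug plug;
		unsigned page_idx;
		int ret = 0;

		blk_start_plug(&plug);

		if (mapping->a_ops->readpages) {
			/* One call for the whole readahead window: the
			 * implementation can build a single large bio
			 * covering all nr_pages. */
			ret = mapping->a_ops->readpages(filp, mapping,
							pages, nr_pages);
			put_pages_list(pages);	/* drop any leftovers */
			goto out;
		}

		/* Fallback: one ->readpage() per page, so the block layer
		 * only ever sees page-sized (8-sector) requests and has to
		 * rely on the elevator to merge them. */
		for (page_idx = 0; page_idx < nr_pages; page_idx++) {
			struct page *page =
				list_entry(pages->prev, struct page, lru);

			list_del(&page->lru);
			if (!add_to_page_cache_lru(page, mapping,
						   page->index, GFP_KERNEL))
				mapping->a_ops->readpage(filp, page);
			page_cache_release(page);
		}
	out:
		blk_finish_plug(&plug);
		return ret;
	}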

---
 fs/block_dev.c |    7 +++++++
 1 file changed, 7 insertions(+)

Index: linux-2.6/fs/block_dev.c
===================================================================
--- linux-2.6.orig/fs/block_dev.c	2012-02-01 22:21:42.000000000 -0500
+++ linux-2.6/fs/block_dev.c	2012-02-02 01:52:40.000000000 -0500
@@ -347,6 +347,12 @@ static int blkdev_readpage(struct file *
 	return block_read_full_page(page, blkdev_get_block);
 }
 
+static int blkdev_readpages(struct file * file, struct address_space *mapping,
+		struct list_head *pages, unsigned nr_pages)
+{
+	return mpage_readpages(mapping, pages, nr_pages, blkdev_get_block);
+}
+
 static int blkdev_write_begin(struct file *file, struct address_space *mapping,
 			loff_t pos, unsigned len, unsigned flags,
 			struct page **pagep, void **fsdata)
@@ -1601,6 +1607,7 @@ static int blkdev_releasepage(struct pag
 
 static const struct address_space_operations def_blk_aops = {
 	.readpage	= blkdev_readpage,
+	.readpages	= blkdev_readpages,
 	.writepage	= blkdev_writepage,
 	.write_begin	= blkdev_write_begin,
 	.write_end	= blkdev_write_end,





^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-02-01 20:10           ` Vivek Goyal
@ 2012-02-01 20:13             ` Jeff Moyer
  2012-02-01 20:22             ` Andrew Morton
  1 sibling, 0 replies; 28+ messages in thread
From: Jeff Moyer @ 2012-02-01 20:13 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: Christoph Hellwig, Andrew Morton, Shaohua Li, lkml, linux-mm,
	Jens Axboe, Herbert Poetzl, Eric Dumazet, Wu Fengguang

Vivek Goyal <vgoyal@redhat.com> writes:

> On Wed, Feb 01, 2012 at 04:18:07AM -0500, Christoph Hellwig wrote:
>> On Tue, Jan 31, 2012 at 10:36:53PM -0500, Vivek Goyal wrote:
>> > I still see that IO is being submitted one page at a time. The only
>> > real difference seems to be that the queue unplug is happening at random
>> > times, and many times we are submitting much smaller requests (40 sectors,
>> > 48 sectors, etc.).
>> 
>> This is expected given that the block device node uses
>> block_read_full_page, and not mpage_readpage(s).
>
> What is the difference between block_read_full_page() and
> mpage_readpage()? IOW, why does the block device not use the
> mpage_readpage(s) interface?
>
> Is enabling mpage_readpages() on block devices as simple as the following
> patch, or is there more to it? (I suspect it has to be more than this. If it
> were this simple, it would have been done by now.)
>
> This patch compiles and seems to work. (The system does not crash and dd
> seems to be working. I can't verify the contents of the file, though.)
>
> Applying the following patch improved the speed from 110MB/s to more than
> 230MB/s.
>
> # dd if=/dev/sdb of=/dev/null bs=1M count=1K
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 4.6269 s, 232 MB/s

See:
commit db2dbb12dc47a50c7a4c5678f526014063e486f6
Author: Jeff Moyer <jmoyer@redhat.com>
Date:   Wed Apr 22 14:08:13 2009 +0200

    block: implement blkdev_readpages
    
    Doing a proper block dev ->readpages() speeds up the crazy dump(8)
    approach of using interleaved process IO.
    
    Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

And:

commit 172124e220f1854acc99ee394671781b8b5e2120
Author: Jens Axboe <jens.axboe@oracle.com>
Date:   Thu Jun 4 22:34:44 2009 +0200

    Revert "block: implement blkdev_readpages"
    
    This reverts commit db2dbb12dc47a50c7a4c5678f526014063e486f6.
    
    It apparently causes problems with partition table read-ahead
    on archs with large page sizes. Until that problem is diagnosed
    further, just drop the readpages support on block devices.
    
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

;-)

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] fix readahead pipeline break caused by block plug
  2012-02-01 20:10           ` Vivek Goyal
  2012-02-01 20:13             ` Jeff Moyer
@ 2012-02-01 20:22             ` Andrew Morton
  1 sibling, 0 replies; 28+ messages in thread
From: Andrew Morton @ 2012-02-01 20:22 UTC (permalink / raw)
  To: Vivek Goyal
  Cc: Christoph Hellwig, Shaohua Li, lkml, linux-mm, Jens Axboe,
	Herbert Poetzl, Eric Dumazet, Wu Fengguang

On Wed, 1 Feb 2012 15:10:17 -0500
Vivek Goyal <vgoyal@redhat.com> wrote:

> On Wed, Feb 01, 2012 at 04:18:07AM -0500, Christoph Hellwig wrote:
> > On Tue, Jan 31, 2012 at 10:36:53PM -0500, Vivek Goyal wrote:
> > > I still see that IO is being submitted one page at a time. The only
> > > real difference seems to be that the queue unplug is happening at random
> > > times, and many times we are submitting much smaller requests (40 sectors,
> > > 48 sectors, etc.).
> > 
> > This is expected given that the block device node uses
> > block_read_full_page, and not mpage_readpage(s).
> 
> What is the difference between block_read_full_page() and
> mpage_readpage()?

block_read_full_page() will attach buffer_heads to the page and will
perform IO via those buffer_heads.  mpage_readpage() feeds the page
directly to the BIO layer and leaves it without attached buffer_heads.
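
To make the contrast concrete, the two paths have roughly these shapes (an
illustrative sketch, not the real fs/buffer.c or fs/mpage.c code; block
mapping, holes and error handling are omitted, and blocks_contiguous() and
blocknr are made-up stand-ins for the real contiguity bookkeeping):

	/* block_read_full_page() style: per-block buffer_heads, one
	 * submit_bh() each, so a single call never submits more than one
	 * page and the elevator has to merge the 8-sector pieces. */
	create_empty_buffers(page, blocksize, 0);
	bh = head = page_buffers(page);
	do {
		get_block(inode, block++, bh, 0);	/* map this block */
		lock_buffer(bh);
		submit_bh(READ, bh);
	} while ((bh = bh->b_this_page) != head);

	/* mpage_readpages() style: keep one bio open across the whole
	 * batch, appending pages while the on-disk blocks stay contiguous,
	 * and submit only when the run breaks -- so a 128k-256k readahead
	 * window can reach the block layer as a single large request. */
	list_for_each_entry_safe(page, tmp, pages, lru) {
		list_del(&page->lru);
		if (bio && blocks_contiguous(bio, page) &&
		    bio_add_page(bio, page, PAGE_CACHE_SIZE, 0))
			continue;
		if (bio)
			submit_bio(READ, bio);
		bio = bio_alloc(GFP_KERNEL, nr_pages);
		bio->bi_sector = blocknr << (blkbits - 9);
		bio_add_page(bio, page, PAGE_CACHE_SIZE, 0);
	}
	if (bio)
		submit_bio(READ, bio);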

> IOW, why does the block device not use the mpage_readpage(s)
> interface?

We've tried it in the past and problems ensued.  A quick google search
for blkdev_readpages turns up stuff like
http://us.generation-nt.com/answer/patch-add-readpages-support-block-devices-help-201462802.html

> Applying the following patch improved the speed from 110MB/s to more than
> 230MB/s.

Yeah.  It should be doable - it would be a matter of hunting down and
squishing the oddball corner cases.


^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2012-02-01 20:22 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-01-31  7:59 [PATCH] fix readahead pipeline break caused by block plug Shaohua Li
2012-01-31  8:36 ` Christoph Hellwig
2012-01-31  8:48 ` Eric Dumazet
2012-01-31  8:50   ` Herbert Poetzl
2012-01-31  8:53   ` Shaohua Li
2012-01-31  9:17     ` Eric Dumazet
2012-01-31 10:20 ` Wu Fengguang
2012-01-31 10:34 ` Wu Fengguang
2012-01-31 10:46   ` Christoph Hellwig
2012-01-31 10:57     ` Wu Fengguang
2012-01-31 11:34       ` Christoph Hellwig
2012-01-31 11:42         ` Wu Fengguang
2012-01-31 11:57           ` Christoph Hellwig
2012-01-31 12:20             ` Wu Fengguang
2012-02-01  2:25   ` Shaohua Li
2012-01-31 14:47 ` Vivek Goyal
2012-01-31 20:23   ` Vivek Goyal
2012-01-31 22:03 ` Vivek Goyal
2012-01-31 22:13   ` Andrew Morton
2012-01-31 22:22     ` Vivek Goyal
2012-02-01  3:36       ` Vivek Goyal
2012-02-01  7:10         ` Wu Fengguang
2012-02-01 16:01           ` Vivek Goyal
2012-02-01  9:18         ` Christoph Hellwig
2012-02-01 20:10           ` Vivek Goyal
2012-02-01 20:13             ` Jeff Moyer
2012-02-01 20:22             ` Andrew Morton
2012-02-01  7:02   ` Wu Fengguang
