linux-kernel.vger.kernel.org archive mirror
* Re: io performance...
  2006-01-16  7:35 io performance Max Waterman
@ 2006-01-16  7:32 ` Jeff V. Merkey
  2006-01-17 13:57   ` Jens Axboe
  2006-01-16  8:35 ` Pekka Enberg
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 30+ messages in thread
From: Jeff V. Merkey @ 2006-01-16  7:32 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 643 bytes --]

Max Waterman wrote:

> Hi,
>
> I've been referred to this list from the linux-raid list.
>
> I've been playing with a RAID system, trying to obtain best bandwidth
> from it.
>
> I've noticed that I consistently get better (read) numbers from kernel 
> 2.6.8
> than from later kernels.


To open the bottlenecks, the following works well.  Jens will shoot me 
for recommending this,
but it works well.  2.6.9 so far has the highest numbers with this fix.  
You can manually putz
around with these numbers, but they are an artificial constraint if you 
are using RAID technology
that caches and elevators requests and consolidates them.


Jeff



[-- Attachment #2: blkdev.patch --]
[-- Type: text/x-patch, Size: 540 bytes --]


diff -Naur ./include/linux/blkdev.h ../linux-2.6.9/./include/linux/blkdev.h
--- ./include/linux/blkdev.h	2004-10-18 15:53:43.000000000 -0600
+++ ../linux-2.6.9/./include/linux/blkdev.h	2005-12-06 09:54:46.000000000 -0700
@@ -23,8 +23,10 @@
 typedef struct elevator_s elevator_t;
 struct request_pm_state;
 
-#define BLKDEV_MIN_RQ	4
-#define BLKDEV_MAX_RQ	128	/* Default maximum */
+//#define BLKDEV_MIN_RQ	4
+//#define BLKDEV_MAX_RQ	128	/* Default maximum */
+#define BLKDEV_MIN_RQ	4096
+#define BLKDEV_MAX_RQ	8192	/* Default maximum */
 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* io performance...
@ 2006-01-16  7:35 Max Waterman
  2006-01-16  7:32 ` Jeff V. Merkey
                   ` (4 more replies)
  0 siblings, 5 replies; 30+ messages in thread
From: Max Waterman @ 2006-01-16  7:35 UTC (permalink / raw)
  To: linux-kernel

Hi,

I've been referred to this list from the linux-raid list.

I've been playing with a RAID system, trying to obtain best bandwidth
from it.

I've noticed that I consistently get better (read) numbers from kernel 2.6.8
than from later kernels.

For example, I get 135MB/s on 2.6.8, but I typically get ~90MB/s on later
kernels.

I'm using this :

<http://www.sharcnet.ca/~hahn/iorate.c>

to measure the iorate. I'm using the debian distribution. The h/w is a MegaRAID
320-2. The array I'm measuring is a RAID0 of 4 Fujitsu Max3073NC 15Krpm drives.

The later kernels I've been using are :

2.6.12-1-686-smp
2.6.14-2-686-smp
2.6.15-1-686-smp

The kernel which gives us the best results is :

2.6.8-2-386

(note that it's not an smp kernel)

I'm testing on an otherwise idle system.

Any ideas as to why this might be? Any other advice/help?

Thanks!

Max.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-16  7:35 io performance Max Waterman
  2006-01-16  7:32 ` Jeff V. Merkey
@ 2006-01-16  8:35 ` Pekka Enberg
  2006-01-17 17:06 ` Phillip Susi
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 30+ messages in thread
From: Pekka Enberg @ 2006-01-16  8:35 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel

Hi,

On 1/16/06, Max Waterman <davidmaxwaterman+kernel@fastmail.co.uk> wrote:
> I've noticed that I consistently get better (read) numbers from kernel 2.6.8
> than from later kernels.

[snip]

> The later kernels I've been using are :
>
> 2.6.12-1-686-smp
> 2.6.14-2-686-smp
> 2.6.15-1-686-smp
>
> The kernel which gives us the best results is :
>
> 2.6.8-2-386
>
> Any ideas as to why this might be? Any other advice/help?

It would be helpful if you could isolate the exact changeset that
introduces the regression. You can use git bisect for that. Please
refer to the following URL for details:
http://www.kernel.org/pub/software/scm/git/docs/howto/isolate-bugs-with-bisect.txt

Also note that changeset for pre 2.6.11-rc2 kernels are in
old-2.6-bkcvs git tree. If you are new to git, you can find a good
introduction here: http://linux.yyz.us/git-howto.html. Thanks.

                               Pekka

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-16  7:32 ` Jeff V. Merkey
@ 2006-01-17 13:57   ` Jens Axboe
  2006-01-17 19:17     ` Jeff V. Merkey
  0 siblings, 1 reply; 30+ messages in thread
From: Jens Axboe @ 2006-01-17 13:57 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: Max Waterman, linux-kernel

On Mon, Jan 16 2006, Jeff V. Merkey wrote:
> Max Waterman wrote:
> 
> >Hi,
> >
> >I've been referred to this list from the linux-raid list.
> >
> >I've been playing with a RAID system, trying to obtain best bandwidth
> >from it.
> >
> >I've noticed that I consistently get better (read) numbers from kernel 
> >2.6.8
> >than from later kernels.
> 
> 
> To open the bottlenecks, the following works well.  Jens will shoot me 
> for recommending this,
> but it works well.  2.6.9 so far has the highest numbers with this fix.  
> You can manually putz
> around with these numbers, but they are an artificial constraint if you 
> are using RAID technology
> that caches and elevators requests and consolidates them.
> 
> 
> Jeff
> 
> 

> 
> diff -Naur ./include/linux/blkdev.h ../linux-2.6.9/./include/linux/blkdev.h
> --- ./include/linux/blkdev.h	2004-10-18 15:53:43.000000000 -0600
> +++ ../linux-2.6.9/./include/linux/blkdev.h	2005-12-06 09:54:46.000000000 -0700
> @@ -23,8 +23,10 @@
>  typedef struct elevator_s elevator_t;
>  struct request_pm_state;
>  
> -#define BLKDEV_MIN_RQ	4
> -#define BLKDEV_MAX_RQ	128	/* Default maximum */
> +//#define BLKDEV_MIN_RQ	4
> +//#define BLKDEV_MAX_RQ	128	/* Default maximum */
> +#define BLKDEV_MIN_RQ	4096
> +#define BLKDEV_MAX_RQ	8192	/* Default maximum */

Yeah I could shoot you. However I'm more interested in why this is
necessary, eg I'd like to see some numbers from you comparing:

- The stock settings
- Doing
        # echo 8192 > /sys/block/<dev>/queue/nr_requests
  for each drive you are accessing.
- The kernel with your patch.

If #2 and #3 don't provide very similar profiles/scores, then we have
something to look at.

The BLKDEV_MIN_RQ increase is just silly and wastes a huge amount of
memory for no good reason.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 30+ messages in thread
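
The sysfs route Jens describes above needs no kernel rebuild. Below is a
minimal C sketch of the same echo; the device name and the value 8192 are
placeholders for illustration, not tuning advice.

/*
 * Minimal sketch of the runtime tunable: equivalent to
 *   echo 8192 > /sys/block/sda/queue/nr_requests
 * Device name and value are placeholders, not recommendations.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *dev = (argc > 1) ? argv[1] : "sda";
	const char *val = (argc > 2) ? argv[2] : "8192";
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/nr_requests", dev);

	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%s\n", val);	/* needs root */
	return fclose(f) ? 1 : 0;
}

Run as root; reading the file back afterwards confirms whether the block
layer accepted the value.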

* Re: io performance...
  2006-01-16  7:35 io performance Max Waterman
  2006-01-16  7:32 ` Jeff V. Merkey
  2006-01-16  8:35 ` Pekka Enberg
@ 2006-01-17 17:06 ` Phillip Susi
  2006-01-18  7:24   ` Max Waterman
  2006-01-18  3:02 ` Max Waterman
  2006-01-19  0:48 ` Adrian Bunk
  4 siblings, 1 reply; 30+ messages in thread
From: Phillip Susi @ 2006-01-17 17:06 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel

Did you direct the program to use O_DIRECT?  If not then I believe the 
problem you are seeing is that the generic block layer is not performing 
large enough readahead to keep all the disks in the array reading at 
once, because the stripe width is rather large.  What stripe factor did 
you format the array using?


I have a sata fakeraid at home of two drives using a stripe factor of 64 
KB.  If I don't issue O_DIRECT IO requests of at least 128 KB ( the 
stripe width ), then throughput drops significantly.  If I issue 
multiple async requests of smaller size that totals at least 128 KB, 
throughput also remains high.  If you only issue a single 32 KB request 
at a time, then two requests must go to one drive and be completed 
before the other drive gets any requests, so it remains idle a lot of 
the time. 

Max Waterman wrote:
> Hi,
>
> I've been referred to this list from the linux-raid list.
>
> I've been playing with a RAID system, trying to obtain best bandwidth
> from it.
>
> I've noticed that I consistently get better (read) numbers from kernel 
> 2.6.8
> than from later kernels.
>
> For example, I get 135MB/s on 2.6.8, but I typically get ~90MB/s on later
> kernels.
>
> I'm using this :
>
> <http://www.sharcnet.ca/~hahn/iorate.c>
>
> to measure the iorate. I'm using the debian distribution. The h/w is a 
> MegaRAID
> 320-2. The array I'm measuring is a RAID0 of 4 Fujitsu Max3073NC 
> 15Krpm drives.
>
> The later kernels I've been using are :
>
> 2.6.12-1-686-smp
> 2.6.14-2-686-smp
> 2.6.15-1-686-smp
>
> The kernel which gives us the best results is :
>
> 2.6.8-2-386
>
> (note that it's not an smp kernel)
>
> I'm testing on an otherwise idle system.
>
> Any ideas as to why this might be? Any other advice/help?
>
> Thanks!
>
> Max.


^ permalink raw reply	[flat|nested] 30+ messages in thread
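
To make the pattern Phillip describes above concrete: with O_DIRECT the
size passed to read() is, give or take splitting, the request size the
array sees, so it needs to span the full stripe width. The following is a
rough sketch only; the device path, request size and request count are
assumptions for the example.

/*
 * Rough sketch of the O_DIRECT read pattern described above: each read()
 * covers at least one full stripe width (128 KiB for a 2 x 64 KiB stripe)
 * so both members of the array stay busy.  Device path, request size and
 * request count are assumptions for the example.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define REQ_SIZE	(128 * 1024)	/* >= stripe width */
#define NR_REQUESTS	8192		/* ~1 GiB in total */

int main(void)
{
	long long total = 0;
	void *buf;
	ssize_t n;
	int i, fd;

	/* O_DIRECT needs the buffer aligned to the logical block size;
	 * 4096 bytes is a safe alignment on most hardware. */
	if (posix_memalign(&buf, 4096, REQ_SIZE))
		return 1;

	fd = open("/dev/sda", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (i = 0; i < NR_REQUESTS; i++) {
		n = read(fd, buf, REQ_SIZE);
		if (n <= 0)
			break;
		total += n;
	}

	printf("read %lld bytes\n", total);
	close(fd);
	free(buf);
	return 0;
}

Timing this against the same run through the page cache (without O_DIRECT)
is one way to see whether readahead size, rather than the drives, is the
limiting factor.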

* Re: io performance...
  2006-01-17 13:57   ` Jens Axboe
@ 2006-01-17 19:17     ` Jeff V. Merkey
  0 siblings, 0 replies; 30+ messages in thread
From: Jeff V. Merkey @ 2006-01-17 19:17 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Max Waterman, linux-kernel

Jens Axboe wrote:

>On Mon, Jan 16 2006, Jeff V. Merkey wrote:
>  
>
>>Max Waterman wrote:
>>
>>    
>>
>>>Hi,
>>>
>>>I've been referred to this list from the linux-raid list.
>>>
>>>I've been playing with a RAID system, trying to obtain best bandwidth
>>>      
>>>
>>>from it.
>>    
>>
>>>I've noticed that I consistently get better (read) numbers from kernel 
>>>2.6.8
>>>than from later kernels.
>>>      
>>>
>>To open the bottlenecks, the following works well.  Jens will shoot me 
>>for recommending this,
>>but it works well.  2.6.9 so far has the highest numbers with this fix.  
>>You can manually putz
>>around with these numbers, but they are an artificial constraint if you 
>>are using RAID technology
>>that caches and elevators requests and consolidates them.
>>
>>
>>Jeff
>>
>>
>>    
>>
>
>  
>
>>diff -Naur ./include/linux/blkdev.h ../linux-2.6.9/./include/linux/blkdev.h
>>--- ./include/linux/blkdev.h	2004-10-18 15:53:43.000000000 -0600
>>+++ ../linux-2.6.9/./include/linux/blkdev.h	2005-12-06 09:54:46.000000000 -0700
>>@@ -23,8 +23,10 @@
>> typedef struct elevator_s elevator_t;
>> struct request_pm_state;
>> 
>>-#define BLKDEV_MIN_RQ	4
>>-#define BLKDEV_MAX_RQ	128	/* Default maximum */
>>+//#define BLKDEV_MIN_RQ	4
>>+//#define BLKDEV_MAX_RQ	128	/* Default maximum */
>>+#define BLKDEV_MIN_RQ	4096
>>+#define BLKDEV_MAX_RQ	8192	/* Default maximum */
>>    
>>
>
>Yeah I could shoot you. However I'm more interested in why this is
>necessary, eg I'd like to see some numbers from you comparing:
>
>- The stock settings
>- Doing
>        # echo 8192 > /sys/block/<dev>/queue/nr_requests
>  for each drive you are accessing.
>- The kernel with your patch.
>
>If #2 and #3 don't provide very similar profiles/scores, then we have
>something to look at.
>
>The BLKDEV_MIN_RQ increase is just silly and wastes a huge amount of
>memory for no good reason.
>
>  
>
Yep. I built it into the kernel to save the trouble of sending it to
proc. Jens' recommendation will work just fine. It has the same effect
of increasing the max requests outstanding.

Jeff

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-16  7:35 io performance Max Waterman
                   ` (2 preceding siblings ...)
  2006-01-17 17:06 ` Phillip Susi
@ 2006-01-18  3:02 ` Max Waterman
  2006-01-18  4:30   ` Jeff V. Merkey
  2006-01-19  0:48 ` Adrian Bunk
  4 siblings, 1 reply; 30+ messages in thread
From: Max Waterman @ 2006-01-18  3:02 UTC (permalink / raw)
  To: linux-kernel

One further question. I get these messages 'in' dmesg :

sda: asking for cache data failed
sda: assuming drive cache: write through

How can I force it to be 'write back'?

Max.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-18  3:02 ` Max Waterman
@ 2006-01-18  4:30   ` Jeff V. Merkey
  2006-01-18  5:09     ` Max Waterman
  2006-01-18  9:21     ` Alan Cox
  0 siblings, 2 replies; 30+ messages in thread
From: Jeff V. Merkey @ 2006-01-18  4:30 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel

Max Waterman wrote:

> One further question. I get these messages 'in' dmesg :
>
> sda: asking for cache data failed
> sda: assuming drive cache: write through
>
> How can I force it to be 'write back'?



Forcing write back is a very bad idea unless you have a battery backed 
up RAID controller.   

Jeff

>
> Max.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-18  5:09     ` Max Waterman
@ 2006-01-18  4:37       ` Jeff V. Merkey
  2006-01-18  7:06         ` Max Waterman
  0 siblings, 1 reply; 30+ messages in thread
From: Jeff V. Merkey @ 2006-01-18  4:37 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel

Max Waterman wrote:

> Jeff V. Merkey wrote:
>
>> Max Waterman wrote:
>>
>>> One further question. I get these messages 'in' dmesg :
>>>
>>> sda: asking for cache data failed
>>> sda: assuming drive cache: write through
>>>
>>> How can I force it to be 'write back'?
>>
>>
>>
>>
>> Forcing write back is a very bad idea unless you have a battery 
>> backed up RAID controller.  
>
>
> We do.
>
> In any case, I wonder what the consequences are of assuming 'write
> through' when the array is configured as 'write back'? Is it just
> different settings for different caches?


It is.  This is something that should be configured in a RAID 
controller.  OS should always be write through.

Jeff

>
> Max.
>
>> Jeff
>>
>>>
>>> Max.
>>
>
>


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-18  4:30   ` Jeff V. Merkey
@ 2006-01-18  5:09     ` Max Waterman
  2006-01-18  4:37       ` Jeff V. Merkey
  2006-01-18  9:21     ` Alan Cox
  1 sibling, 1 reply; 30+ messages in thread
From: Max Waterman @ 2006-01-18  5:09 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: linux-kernel

Jeff V. Merkey wrote:
> Max Waterman wrote:
> 
>> One further question. I get these messages 'in' dmesg :
>>
>> sda: asking for cache data failed
>> sda: assuming drive cache: write through
>>
>> How can I force it to be 'write back'?
> 
> 
> 
> Forcing write back is a very bad idea unless you have a battery backed 
> up RAID controller.  

We do.

In any case, I wonder what the consequences are of assuming 'write through'
when the array is configured as 'write back'? Is it just different
settings for different caches?

Max.

> Jeff
> 
>>
>> Max.
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-18  4:37       ` Jeff V. Merkey
@ 2006-01-18  7:06         ` Max Waterman
  0 siblings, 0 replies; 30+ messages in thread
From: Max Waterman @ 2006-01-18  7:06 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: linux-kernel

Jeff V. Merkey wrote:
> Max Waterman wrote:
> 
>> Jeff V. Merkey wrote:
>>
>>> Max Waterman wrote:
>>>
>>>> One further question. I get these messages 'in' dmesg :
>>>>
>>>> sda: asking for cache data failed
>>>> sda: assuming drive cache: write through
>>>>
>>>> How can I force it to be 'write back'?
>>>
>>>
>>>
>>>
>>> Forcing write back is a very bad idea unless you have a battery 
>>> backed up RAID controller.  
>>
>>
>> We do.
>>
>> In any case, I wonder what the consequences are of assuming 'write
>> through' when the array is configured as 'write back'? Is it just
>> different settings for different caches?
> 
> 
> It is.  This is something that should be configured in a RAID 
> controller.  OS should always be write through.

Ok, thanks for clearing that up, though I now wonder why the message is 
there.

<shrug>

Max.

> 
> Jeff
> 
>>
>> Max.
>>
>>> Jeff
>>>
>>>>
>>>> Max.
>>>
>>
>>
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-17 17:06 ` Phillip Susi
@ 2006-01-18  7:24   ` Max Waterman
  2006-01-18 15:19     ` Phillip Susi
  0 siblings, 1 reply; 30+ messages in thread
From: Max Waterman @ 2006-01-18  7:24 UTC (permalink / raw)
  To: Phillip Susi; +Cc: linux-kernel

Phillip Susi wrote:
> Did you direct the program to use O_DIRECT?

I'm just using the s/w (iorate/bonnie++) with default options - I'm no 
expert. I could try though.

> If not then I believe the 
> problem you are seeing is that the generic block layer is not performing 
> large enough readahead to keep all the disks in the array reading at 
> once, because the stripe width is rather large.  What stripe factor did 
> you format the array using?

I left the stripe size at the default, which, I believe, is 64K bytes; 
same as your fakeraid below.

I did play with 'blockdev --setra' too.

I noticed it was 256 with a single disk, and, with s/w raid, it 
increased by 256 for each extra disk in the array. IE for the raid 0 
array with 4 drives, it was 1024.

With h/w raid, however, it did not increase when I added disks. Should I 
use 'blockdev --setra 320' (ie 64 x 5 = 320, since we're now running 
RAID5 on 5 drives)?

> I have a sata fakeraid at home of two drives using a stripe factor of 64 
> KB.  If I don't issue O_DIRECT IO requests of at least 128 KB ( the 
> stripe width ), then throughput drops significantly.  If I issue 
> multiple async requests of smaller size that totals at least 128 KB, 
> throughput also remains high.  If you only issue a single 32 KB request 
> at a time, then two requests must go to one drive and be completed 
> before the other drive gets any requests, so it remains idle a lot of 
> the time.

I think that makes sense (which is a change in this RAID performance 
business :( ).

Thanks.

Max.

> Max Waterman wrote:
>> Hi,
>>
>> I've been referred to this list from the linux-raid list.
>>
>> I've been playing with a RAID system, trying to obtain best bandwidth
>> from it.
>>
>> I've noticed that I consistently get better (read) numbers from kernel 
>> 2.6.8
>> than from later kernels.
>>
>> For example, I get 135MB/s on 2.6.8, but I typically get ~90MB/s on later
>> kernels.
>>
>> I'm using this :
>>
>> <http://www.sharcnet.ca/~hahn/iorate.c>
>>
>> to measure the iorate. I'm using the debian distribution. The h/w is a 
>> MegaRAID
>> 320-2. The array I'm measuring is a RAID0 of 4 Fujitsu Max3073NC 
>> 15Krpm drives.
>>
>> The later kernels I've been using are :
>>
>> 2.6.12-1-686-smp
>> 2.6.14-2-686-smp
>> 2.6.15-1-686-smp
>>
>> The kernel which gives us the best results is :
>>
>> 2.6.8-2-386
>>
>> (note that it's not an smp kernel)
>>
>> I'm testing on an otherwise idle system.
>>
>> Any ideas as to why this might be? Any other advice/help?
>>
>> Thanks!
>>
>> Max.
>>
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-18  4:30   ` Jeff V. Merkey
  2006-01-18  5:09     ` Max Waterman
@ 2006-01-18  9:21     ` Alan Cox
  2006-01-18 15:48       ` Phillip Susi
  1 sibling, 1 reply; 30+ messages in thread
From: Alan Cox @ 2006-01-18  9:21 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: Max Waterman, linux-kernel

On Maw, 2006-01-17 at 21:30 -0700, Jeff V. Merkey wrote:
> > How can I force it to be 'write back'?
> Forcing write back is a very bad idea unless you have a battery backed 
> up RAID controller.   

Not always. If you have a cache flush command and the OS knows about
using it, or if you don't care if the data gets lost over a power
failure (eg /tmp and swap) it makes sense to force it.

The raid controller drivers that fake scsi don't always fake enough of
scsi to report that they support cache flushes and the like. That
doesn't mean the controller itself is necessarily doing one thing or
the other.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-18  7:24   ` Max Waterman
@ 2006-01-18 15:19     ` Phillip Susi
  2006-01-20  5:58       ` Max Waterman
  0 siblings, 1 reply; 30+ messages in thread
From: Phillip Susi @ 2006-01-18 15:19 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel

Right, the kernel does not know how many disks are in the array, so it 
can't automatically increase the readahead.  I'd say increasing the 
readahead manually should solve your throughput issues.

Max Waterman wrote:
> 
> I left the stripe size at the default, which, I believe, is 64K bytes; 
> same as your fakeraid below.
> 
> I did play with 'blockdev --setra' too.
> 
> I noticed it was 256 with a single disk, and, with s/w raid, it 
> increased by 256 for each extra disk in the array. IE for the raid 0 
> array with 4 drives, it was 1024.
> 
> With h/w raid, however, it did not increase when I added disks. Should I 
> use 'blockdev --setra 320' (ie 64 x 5 = 320, since we're now running 
> RAID5 on 5 drives)?
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread
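
One unit detail worth keeping in mind when picking a readahead value:
'blockdev --setra' and the ioctl behind it count 512-byte sectors, not
kilobytes, so the single-disk default of 256 is 128 KiB, and a value of
320 would be 160 KiB rather than five 64 KiB chunks. Below is a small
sketch of the same get/set; the device path and the new value (512
sectors = 256 KiB, i.e. one full 4 x 64 KiB data stripe of the array
discussed above) are assumptions for the example.

/*
 * Sketch of what "blockdev --getra / --setra" does underneath, via the
 * BLKRAGET/BLKRASET ioctls.  Values are in 512-byte sectors; the device
 * path and the new value are assumptions for the example.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>		/* BLKRAGET, BLKRASET */

int main(void)
{
	long ra = 0;
	int fd = open("/dev/sda", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, BLKRAGET, &ra) == 0)
		printf("current readahead: %ld sectors (%ld KiB)\n", ra, ra / 2);

	if (ioctl(fd, BLKRASET, 512UL) != 0)	/* needs root */
		perror("BLKRASET");

	close(fd);
	return 0;
}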

* Re: io performance...
  2006-01-18  9:21     ` Alan Cox
@ 2006-01-18 15:48       ` Phillip Susi
  2006-01-18 16:25         ` Bartlomiej Zolnierkiewicz
  0 siblings, 1 reply; 30+ messages in thread
From: Phillip Susi @ 2006-01-18 15:48 UTC (permalink / raw)
  To: Alan Cox; +Cc: Jeff V. Merkey, Max Waterman, linux-kernel

I was going to say, doesn't the kernel set the FUA bit on the write 
request to push important flushes through the disk's write-back cache?  
Like for filesystem journal flushes?


Alan Cox wrote:
> Not always. If you have a cache flush command and the OS knows about
> using it, or if you don't care if the data gets lost over a power
> failure (eg /tmp and swap) it makes sense to force it.
>
> The raid controller drivers that fake scsi don't always fake enough of
> scsi to report that they support cache flushes and the like. That
> doesn't mean the controller itself is necessarily doing one thing or
> the other.
>


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-18 15:48       ` Phillip Susi
@ 2006-01-18 16:25         ` Bartlomiej Zolnierkiewicz
  0 siblings, 0 replies; 30+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2006-01-18 16:25 UTC (permalink / raw)
  To: Phillip Susi; +Cc: Alan Cox, Jeff V. Merkey, Max Waterman, linux-kernel

On 1/18/06, Phillip Susi <psusi@cfl.rr.com> wrote:
> I was going to say, doesn't the kernel set the FUA bit on the write
> request to push important flushes through the disk's write-back cache?
> Like for filesystem journal flushes?

Yes if:
* you have a disk supporting FUA
* you have kernel >= 2.6.16-rc1
* you are using a SCSI (this includes libata) driver [ support for the IDE
  driver will be merged later, when races in changing IDE settings are fixed ]

Bartlomiej

> Alan Cox wrote:
> > Not always. If you have a cache flush command and the OS knows about
> > using it, or if you don't care if the data gets lost over a power
> > failure (eg /tmp and swap) it makes sense to force it.
> >
> > The raid controller drivers that fake scsi don't always fake enough of
> > scsi to report that they support cache flushes and the like. That
> > doesn't mean the controller itself is necessarily doing one thing or
> > the other.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-16  7:35 io performance Max Waterman
                   ` (3 preceding siblings ...)
  2006-01-18  3:02 ` Max Waterman
@ 2006-01-19  0:48 ` Adrian Bunk
  2006-01-19 13:18   ` Max Waterman
  4 siblings, 1 reply; 30+ messages in thread
From: Adrian Bunk @ 2006-01-19  0:48 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel

On Mon, Jan 16, 2006 at 03:35:31PM +0800, Max Waterman wrote:
> Hi,
> 
> I've been referred to this list from the linux-raid list.
> 
> I've been playing with a RAID system, trying to obtain best bandwidth
> from it.
> 
> I've noticed that I consistently get better (read) numbers from kernel 2.6.8
> than from later kernels.
> 
> For example, I get 135MB/s on 2.6.8, but I typically get ~90MB/s on later
> kernels.
> 
> I'm using this :
> 
> <http://www.sharcnet.ca/~hahn/iorate.c>
> 
> to measure the iorate. I'm using the debian distribution. The h/w is a 
> MegaRAID
> 320-2. The array I'm measuring is a RAID0 of 4 Fujitsu Max3073NC 15Krpm 
> drives.
> 
> The later kernels I've been using are :
> 
> 2.6.12-1-686-smp
> 2.6.14-2-686-smp
> 2.6.15-1-686-smp
> 
> The kernel which gives us the best results is :
> 
> 2.6.8-2-386
> 
> (note that it's not an smp kernel)
> 
> I'm testing on an otherwise idle system.
> 
> Any ideas as to why this might be? Any other advice/help?

You should try to narrow the problem down a bit.

Possible causes are:
- kernel regression between 2.6.8 and 2.6.12
- SMP <-> !SMP support
- patches and/or configuration changes in the Debian kernels

You should try self-compiled unmodified 2.6.8 and 2.6.12 ftp.kernel.org 
kernels with the same .config (modulo differences by "make oldconfig").

After this test, you will know whether you are in the first case.
If so, you could do a bisect search to find the point where the
regression started.

> Thanks!
> 
> Max.

cu
Adrian

-- 

       "Is there not promise of rain?" Ling Tan asked suddenly out
        of the darkness. There had been need of rain for many days.
       "Only a promise," Lao Er said.
                                       Pearl S. Buck - Dragon Seed


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-19  0:48 ` Adrian Bunk
@ 2006-01-19 13:18   ` Max Waterman
  0 siblings, 0 replies; 30+ messages in thread
From: Max Waterman @ 2006-01-19 13:18 UTC (permalink / raw)
  To: Adrian Bunk; +Cc: linux-kernel

Unfortunately, they don't want me to spend time doing this sort of 
thing, so I'm out of luck.

They're going to stick with 2.6.8-smp, which seems to give the best 
performance (which rules out your second case below, I suppose).

:|

Max.

Adrian Bunk wrote:
> On Mon, Jan 16, 2006 at 03:35:31PM +0800, Max Waterman wrote:
>> Hi,
>>
>> I've been referred to this list from the linux-raid list.
>>
>> I've been playing with a RAID system, trying to obtain best bandwidth
>> from it.
>>
>> I've noticed that I consistently get better (read) numbers from kernel 2.6.8
>> than from later kernels.
>>
>> For example, I get 135MB/s on 2.6.8, but I typically get ~90MB/s on later
>> kernels.
>>
>> I'm using this :
>>
>> <http://www.sharcnet.ca/~hahn/iorate.c>
>>
>> to measure the iorate. I'm using the debian distribution. The h/w is a 
>> MegaRAID
>> 320-2. The array I'm measuring is a RAID0 of 4 Fujitsu Max3073NC 15Krpm 
>> drives.
>>
>> The later kernels I've been using are :
>>
>> 2.6.12-1-686-smp
>> 2.6.14-2-686-smp
>> 2.6.15-1-686-smp
>>
>> The kernel which gives us the best results is :
>>
>> 2.6.8-2-386
>>
>> (note that it's not an smp kernel)
>>
>> I'm testing on an otherwise idle system.
>>
>> Any ideas as to why this might be? Any other advice/help?
> 
> You should try to narrow the problem down a bit.
> 
> Possible causes are:
> - kernel regression between 2.6.8 and 2.6.12
> - SMP <-> !SMP support
> - patches and/or configuration changes in the Debian kernels
> 
> You should try self-compiled unmodified 2.6.8 and 2.6.12 ftp.kernel.org 
> kernels with the same .config (modulo differences by "make oldconfig").
> 
> After this test, you will know whether you are in the first case.
> If so, you could do a bisect search to find the point where the
> regression started.
> 
>> Thanks!
>>
>> Max.
> 
> cu
> Adrian
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-18 15:19     ` Phillip Susi
@ 2006-01-20  5:58       ` Max Waterman
  2006-01-20 13:42         ` Ian Soboroff
  0 siblings, 1 reply; 30+ messages in thread
From: Max Waterman @ 2006-01-20  5:58 UTC (permalink / raw)
  To: Phillip Susi; +Cc: linux-kernel

Phillip Susi wrote:
> Right, the kernel does not know how many disks are in the array, so it 
> can't automatically increase the readahead.  I'd say increasing the 
> readahead manually should solve your throughput issues.

Any guesses for a good number?

We're in RAID10 (2+2) at the moment on 2.6.8-smp. These are the block 
numbers I'm getting using bonnie++ :

ra	wr	rd
256	68K	46K
512	67K	59K
640	67K	64K
1024	66K	73K
2048	67K	88K
3072	67K	91K
8192	71K	96K
9216	67K	92K
16384	67K	90K

I think we might end up going for 8192.

We're still wondering why rd performance is so low - seems to be the 
same as a single drive. RAID10 should be the same performance as RAID0 
over two drives, shouldn't it?

Max.

> 
> Max Waterman wrote:
>>
>> I left the stripe size at the default, which, I believe, is 64K bytes; 
>> same as your fakeraid below.
>>
>> I did play with 'blockdev --setra' too.
>>
>> I noticed it was 256 with a single disk, and, with s/w raid, it 
>> increased by 256 for each extra disk in the array. IE for the raid 0 
>> array with 4 drives, it was 1024.
>>
>> With h/w raid, however, it did not increase when I added disks. Should 
>> I use 'blockdev --setra 320' (ie 64 x 5 = 320, since we're now running 
>> RAID5 on 5 drives)?
>>
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-20  5:58       ` Max Waterman
@ 2006-01-20 13:42         ` Ian Soboroff
  2006-01-25  6:36           ` Max Waterman
  2006-01-25 13:09           ` Bernd Eckenfels
  0 siblings, 2 replies; 30+ messages in thread
From: Ian Soboroff @ 2006-01-20 13:42 UTC (permalink / raw)
  To: linux-kernel

Max Waterman <davidmaxwaterman+kernel@fastmail.co.uk> writes:

> Phillip Susi wrote:
>> Right, the kernel does not know how many disks are in the array, so
>> it can't automatically increase the readahead.  I'd say increasing
>> the readahead manually should solve your throughput issues.
>
> Any guesses for a good number?
>
> We're in RAID10 (2+2) at the moment on 2.6.8-smp. These are the block
> numbers I'm getting using bonnie++ :
>
>[...]
> We're still wondering why rd performance is so low - seems to be the
> same as a single drive. RAID10 should be the same performance as RAID0
> over two drives, shouldn't it?

I think bonnie++ measures accesses to many small files (INN-like
simulation) and database accesses.  These are random accesses, which
is the worst access pattern for RAID.  Seek time in a RAID equals the
longest of all the drives in the RAID, rather than the average.  So
bonnie++ is dominated by your seek time.

Ian



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-20 13:42         ` Ian Soboroff
@ 2006-01-25  6:36           ` Max Waterman
  2006-01-25 14:19             ` Ian Soboroff
  2006-01-25 13:09           ` Bernd Eckenfels
  1 sibling, 1 reply; 30+ messages in thread
From: Max Waterman @ 2006-01-25  6:36 UTC (permalink / raw)
  To: Ian Soboroff; +Cc: linux-kernel

Ian Soboroff wrote:
> Max Waterman <davidmaxwaterman+kernel@fastmail.co.uk> writes:
> 
>> Phillip Susi wrote:
>>> Right, the kernel does not know how many disks are in the array, so
>>> it can't automatically increase the readahead.  I'd say increasing
>>> the readahead manually should solve your throughput issues.
>> Any guesses for a good number?
>>
>> We're in RAID10 (2+2) at the moment on 2.6.8-smp. These are the block
>> numbers I'm getting using bonnie++ :
>>
>> [...]
>> We're still wondering why rd performance is so low - seems to be the
>> same as a single drive. RAID10 should be the same performance as RAID0
>> over two drives, shouldn't it?
> 
> I think bonnie++ measures accesses to many small files (INN-like
> simulation) and database accesses.  These are random accesses, which
> is the worst access pattern for RAID.  Seek time in a RAID equals the
> longest of all the drives in the RAID, rather than the average.  So
>> bonnie++ is dominated by your seek time.

You think so? I had assumed when bonnie++'s output said 'sequential 
access' that it was the opposite of random, for example (raid0 on 5 
drives) :

> +---------------------------------------------------------------------------------------------------------------------------------------------------+
> |                     |Sequential Output             |Sequential Input    |         |     |Sequential Create           |Random Create               |
> |---------------------+------------------------------+--------------------|Random   |-----+----------------------------+----------------------------|
> |          |Size:Chunk|Per Char |Block     |Rewrite  |Per Char |Block     |Seeks    |Num  |Create  |Read     |Delete   |Create  |Read     |Delete   |
> |          |Size      |         |          |         |         |          |         |Files|        |         |         |        |         |         |
> |---------------------+---------+----------+---------+---------+----------+---------+-----+--------+---------+---------+--------+---------+---------|
> |                     |K/sec|%  |K/sec |%  |K/sec|%  |K/sec|%  |K/sec |%  |/ sec|%  |     |/   |%  |/ sec|%  |/ sec|%  |/   |%  |/ sec|%  |/ sec|%  |
> |                     |     |CPU|      |CPU|     |CPU|     |CPU|      |CPU|     |CPU|     |sec |CPU|     |CPU|     |CPU|sec |CPU|     |CPU|     |CPU|
> |---------------------+-----+---+------+---+-----+---+-----+---+------+---+-----+---+-----+----+---+-----+---+-----+---+----+---+-----+---+-----+---|
> |hostname  |2G        |48024|96 |121412|13 |59714|10 |47844|95 |200264|21 |942.8|1  |16   |4146|99 |+++++|+++|+++++|+++|4167|99 |+++++|+++|14292|99 |
> +---------------------------------------------------------------------------------------------------------------------------------------------------+

Am I wrong? If so, what exactly does 'Sequential' mean in this context?

Max.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-20 13:42         ` Ian Soboroff
  2006-01-25  6:36           ` Max Waterman
@ 2006-01-25 13:09           ` Bernd Eckenfels
  1 sibling, 0 replies; 30+ messages in thread
From: Bernd Eckenfels @ 2006-01-25 13:09 UTC (permalink / raw)
  To: linux-kernel

Ian Soboroff <isoboroff@acm.org> wrote:
> simulation) and database accesses.  These are random accesses, which
> is the worst access pattern for RAID.  Seek time in a RAID equals the
> longest of all the drives in the RAID, rather than the average.

Well, actually it equals the shortest seek time and it distributes the
seeks to multiple spindles (at least for raid1).

Gruss
Bernd

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-25  6:36           ` Max Waterman
@ 2006-01-25 14:19             ` Ian Soboroff
  0 siblings, 0 replies; 30+ messages in thread
From: Ian Soboroff @ 2006-01-25 14:19 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel

Max Waterman <davidmaxwaterman@fastmail.co.uk> writes:

>>> We're still wondering why rd performance is so low - seems to be the
>>> same as a single drive. RAID10 should be the same performance as RAID0
>>> over two drives, shouldn't it?
>>>
>> I think bonnie++ measures accesses to many small files (INN-like
>> simulation) and database accesses.  These are random accesses, which
>> is the worst access pattern for RAID.  Seek time in a RAID equals the
>> longest of all the drives in the RAID, rather than the average.  So
>> bonnie++ is dominated by your seek time.
>
> You think so? I had assumed when bonnie++'s output said 'sequential
> access' that it was the opposite of random, for example (raid0 on 5
> drives) :
>

I could be wrong; I was just going by the information on the bonnie++
website...

Ian


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-20  4:09           ` Max Waterman
  2006-01-20  4:27             ` Alexander Samad
@ 2006-01-20 12:52             ` Alan Cox
  1 sibling, 0 replies; 30+ messages in thread
From: Alan Cox @ 2006-01-20 12:52 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel

On Gwe, 2006-01-20 at 12:09 +0800, Max Waterman wrote:
> I'm not sure what difference it makes if the controller is battery 
> backed or not; if the drives are gone, then the card has nothing to 
> write to...will it make the writes when the power comes back on?

Yes it will, hopefully having checked first before writing. On the higher-end
ones you can even pull the battery-backed RAM module out, change the
RAID card, and it will still do it.

Alan


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-20  4:09           ` Max Waterman
@ 2006-01-20  4:27             ` Alexander Samad
  2006-01-20 12:52             ` Alan Cox
  1 sibling, 0 replies; 30+ messages in thread
From: Alexander Samad @ 2006-01-20  4:27 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel, Alan Cox

[-- Attachment #1: Type: text/plain, Size: 883 bytes --]

On Fri, Jan 20, 2006 at 12:09:14PM +0800, Max Waterman wrote:
> Alan Cox wrote:
> >On Iau, 2006-01-19 at 21:14 +0800, Max Waterman wrote:
> >>So, if I have my raid controller set to use write-back, it *is* caching 
> >>the writes, and so this *is* a bad thing, right?
> >
> >Depends on your raid controller. If it is battery backed it may well all
> >be fine.
> 
> Eh? Why?
> 
> I'm not sure what difference it makes if the controller is battery 
> backed or not; if the drives are gone, then the card has nothing to 
> write to...will it make the writes when the power comes back on?
some do
> 
> Max.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-19 14:08         ` Alan Cox
@ 2006-01-20  4:09           ` Max Waterman
  2006-01-20  4:27             ` Alexander Samad
  2006-01-20 12:52             ` Alan Cox
  0 siblings, 2 replies; 30+ messages in thread
From: Max Waterman @ 2006-01-20  4:09 UTC (permalink / raw)
  To: linux-kernel; +Cc: Alan Cox

Alan Cox wrote:
> On Iau, 2006-01-19 at 21:14 +0800, Max Waterman wrote:
>> So, if I have my raid controller set to use write-back, it *is* caching 
>> the writes, and so this *is* a bad thing, right?
> 
> Depends on your raid controller. If it is battery backed it may well all
> be fine.

Eh? Why?

I'm not sure what difference it makes if the controller is battery 
backed or not; if the drives are gone, then the card has nothing to 
write to...will it make the writes when the power comes back on?

Max.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-19 13:14       ` Max Waterman
@ 2006-01-19 14:08         ` Alan Cox
  2006-01-20  4:09           ` Max Waterman
  0 siblings, 1 reply; 30+ messages in thread
From: Alan Cox @ 2006-01-19 14:08 UTC (permalink / raw)
  To: Max Waterman; +Cc: Robert Hancock, linux-kernel

On Iau, 2006-01-19 at 21:14 +0800, Max Waterman wrote:
> So, if I have my raid controller set to use write-back, it *is* caching 
> the writes, and so this *is* a bad thing, right?

Depends on your raid controller. If it is battery backed it may well all
be fine.

Alan

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
  2006-01-19  1:58     ` Robert Hancock
@ 2006-01-19 13:14       ` Max Waterman
  2006-01-19 14:08         ` Alan Cox
  0 siblings, 1 reply; 30+ messages in thread
From: Max Waterman @ 2006-01-19 13:14 UTC (permalink / raw)
  To: Robert Hancock; +Cc: linux-kernel

Robert Hancock wrote:
> Jeff V. Merkey wrote:
>> Max Waterman wrote:
>>
>>> One further question. I get these messages 'in' dmesg :
>>>
>>> sda: asking for cache data failed
>>> sda: assuming drive cache: write through
>>>
>>> How can I force it to be 'write back'?
>>
>> Forcing write back is a very bad idea unless you have a battery backed 
>> up RAID controller.  
> 
> This is not what these messages are referring to. Those write through 
> vs. write back messages are referring to detecting the drive write cache 
> mode, not setting it. Whether or not the write cache is enabled is used 
> to determine whether the sd driver uses SYNCHRONIZE CACHE commands to 
> flush the write cache on the device. If the drive says its write cache 
> is off or doesn't support determining the cache status, the kernel will 
> not issue SYNCHRONIZE CACHE commands. This may be a bad thing if the 
> device is really using write caching..
>     

So, if I have my raid controller set to use write-back, it *is* caching 
the writes, and so this *is* a bad thing, right?

If so, how to fix?

Max.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
@ 2006-01-19 11:39 Al Boldi
  0 siblings, 0 replies; 30+ messages in thread
From: Al Boldi @ 2006-01-19 11:39 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-raid

Jeff V. Merkey wrote:
> Jens Axboe wrote:
> >On Mon, Jan 16 2006, Jeff V. Merkey wrote:
> >>Max Waterman wrote:
> >>>I've noticed that I consistently get better (read) numbers from kernel 
> >>>2.6.8 than from later kernels.
> >>
> >>To open the bottlenecks, the following works well.  Jens will shoot me
> >>-#define BLKDEV_MIN_RQ        4
> >>-#define BLKDEV_MAX_RQ        128     /* Default maximum */
> >>+#define BLKDEV_MIN_RQ        4096
> >>+#define BLKDEV_MAX_RQ        8192    /* Default maximum */
> >
> >Yeah I could shoot you. However I'm more interested in why this is
> >necessary, eg I'd like to see some numbers from you comparing:
> >
> >- Doing
> >        # echo 8192 > /sys/block/<dev>/queue/nr_requests
> >  for each drive you are accessing.
> >
> >The BLKDEV_MIN_RQ increase is just silly and wastes a huge amount of
> >memory for no good reason.
>
> Yep. I built it into the kernel to save the trouble of sending it to proc.
> Jens' recommendation will work just fine. It has the same effect of
> increasing the max requests outstanding.

Your suggestion doesn't do anything here on 2.6.15, but
	echo 192 > /sys/block/<dev>/queue/max_sectors_kb 
	echo 192 > /sys/block/<dev>/queue/read_ahead_kb 
works wonders!

I don't know why, but anything less than 64 or more than 256 makes the queue
collapse miserably, causing some strange __copy_to_user calls?!?!?

Also, it seems that changing the kernel HZ has some drastic effects on the 
queues.  A simple lilo gets delayed 400% and 200% using 100HZ and 250HZ 
respectively.

--
Al


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: io performance...
       [not found]   ` <5wdKh-5wF-15@gated-at.bofh.it>
@ 2006-01-19  1:58     ` Robert Hancock
  2006-01-19 13:14       ` Max Waterman
  0 siblings, 1 reply; 30+ messages in thread
From: Robert Hancock @ 2006-01-19  1:58 UTC (permalink / raw)
  To: linux-kernel

Jeff V. Merkey wrote:
> Max Waterman wrote:
> 
>> One further question. I get these messages 'in' dmesg :
>>
>> sda: asking for cache data failed
>> sda: assuming drive cache: write through
>>
>> How can I force it to be 'write back'?
> 
> Forcing write back is a very bad idea unless you have a battery backed 
> up RAID controller.  

This is not what these messages are referring to. Those write through 
vs. write back messages are referring to detecting the drive write cache 
mode, not setting it. Whether or not the write cache is enabled is used 
to determine whether the sd driver uses SYNCHRONIZE CACHE commands to 
flush the write cache on the device. If the drive says its write cache 
is off or doesn't support determining the cache status, the kernel will 
not issue SYNCHRONIZE CACHE commands. This may be a bad thing if the 
device is really using write caching..
	
-- 
Robert Hancock      Saskatoon, SK, Canada
To email, remove "nospam" from hancockr@nospamshaw.ca
Home Page: http://www.roberthancock.com/


^ permalink raw reply	[flat|nested] 30+ messages in thread
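
For completeness, the application-side corollary of Robert's explanation:
durability has to be requested explicitly, and whether that request turns
into a SYNCHRONIZE CACHE, an FUA write, or nothing at all depends on what
the kernel believes about the device's cache. A minimal sketch with a
placeholder file name and payload:

/*
 * Minimal sketch: an application that needs data on stable storage has to
 * ask for it with fsync()/fdatasync().  Whether that becomes a SYNCHRONIZE
 * CACHE or FUA write, or is skipped because the kernel believes the cache
 * is write-through, is decided by the layers discussed above.  File name
 * and payload are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char msg[] = "critical record\n";
	int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, msg, sizeof(msg) - 1) != (ssize_t)(sizeof(msg) - 1)) {
		perror("write");
		return 1;
	}
	if (fsync(fd) != 0) {	/* flush data (and the drive cache, if supported) */
		perror("fsync");
		return 1;
	}
	close(fd);
	return 0;
}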

end of thread, other threads:[~2006-01-25 14:19 UTC | newest]

Thread overview: 30+ messages
2006-01-16  7:35 io performance Max Waterman
2006-01-16  7:32 ` Jeff V. Merkey
2006-01-17 13:57   ` Jens Axboe
2006-01-17 19:17     ` Jeff V. Merkey
2006-01-16  8:35 ` Pekka Enberg
2006-01-17 17:06 ` Phillip Susi
2006-01-18  7:24   ` Max Waterman
2006-01-18 15:19     ` Phillip Susi
2006-01-20  5:58       ` Max Waterman
2006-01-20 13:42         ` Ian Soboroff
2006-01-25  6:36           ` Max Waterman
2006-01-25 14:19             ` Ian Soboroff
2006-01-25 13:09           ` Bernd Eckenfels
2006-01-18  3:02 ` Max Waterman
2006-01-18  4:30   ` Jeff V. Merkey
2006-01-18  5:09     ` Max Waterman
2006-01-18  4:37       ` Jeff V. Merkey
2006-01-18  7:06         ` Max Waterman
2006-01-18  9:21     ` Alan Cox
2006-01-18 15:48       ` Phillip Susi
2006-01-18 16:25         ` Bartlomiej Zolnierkiewicz
2006-01-19  0:48 ` Adrian Bunk
2006-01-19 13:18   ` Max Waterman
     [not found] <5vx8f-1Al-21@gated-at.bofh.it>
     [not found] ` <5wbRY-3cF-3@gated-at.bofh.it>
     [not found]   ` <5wdKh-5wF-15@gated-at.bofh.it>
2006-01-19  1:58     ` Robert Hancock
2006-01-19 13:14       ` Max Waterman
2006-01-19 14:08         ` Alan Cox
2006-01-20  4:09           ` Max Waterman
2006-01-20  4:27             ` Alexander Samad
2006-01-20 12:52             ` Alan Cox
2006-01-19 11:39 Al Boldi
