* unbelievably bad performance: 2.6.27.37 and raid6
@ 2009-10-31 15:55 Jon Nelson
  2009-10-31 18:43 ` Thomas Fjellstrom
                   ` (3 more replies)
  0 siblings, 4 replies; 26+ messages in thread
From: Jon Nelson @ 2009-10-31 15:55 UTC (permalink / raw)
  To: LinuxRaid

I have a 4 disk raid6. The disks are individually capable of (at
least) 75MB/s on average.
The raid6 looks like this:

md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
      613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]

The raid serves basically as an lvm physical volume.

While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
am observing speeds in the 10-15MB/s range.
That seems really really slow.

Using vmstat, I see similar numbers (I'm averaging a bit, I'll see
lows of 6MB/s and highs of 18-20MB/s, but these are infrequent.)
The system is, for the most part, otherwise unloaded.

I looked at stripe_cache_size and increased it to 384 - no difference.
blockdev --getra reports 256 for all involved raid components.
I'm using the deadline I/O scheduler.

Am I crazy?  Is 12.5MB/s (average) what I should expect, here?  What
might I look at here?

-- 
Jon

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-10-31 15:55 unbelievably bad performance: 2.6.27.37 and raid6 Jon Nelson
@ 2009-10-31 18:43 ` Thomas Fjellstrom
  2009-11-01 19:37   ` Andrew Dunn
  2009-10-31 19:59 ` Christian Pernegger
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 26+ messages in thread
From: Thomas Fjellstrom @ 2009-10-31 18:43 UTC (permalink / raw)
  To: Jon Nelson; +Cc: LinuxRaid

On Sat October 31 2009, Jon Nelson wrote:
> I have a 4 disk raid6. The disks are individually capable of (at
> least) 75MB/s on average.
> The raid6 looks like this:
> 
> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
>       613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4]
>  [UUUU]
> 
> The raid serves basically as an lvm physical volume.
> 
> While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
> am observing speeds in the 10-15MB/s range.
> That seems really really slow.
> 
> Using vmstat, I see similar numbers (I'm averaging a bit, I'll see
> lows of 6MB/s and highs of 18-20MB/s, but these are infrequent.)
> The system is, for the most part, otherwise unloaded.
> 
> I looked at stripe_cache_size and increased it to 384 - no difference.
> blockdev --getra reports 256 for all involved raid components.
> I'm using the deadline I/O scheduler.
> 
> Am I crazy?  Is 12.5MB/s (average) what I should expect, here?  What
> might I look at here?
> 

I can't say I see numbers that bad, but I do get a third or less of the
performance with .29, .30, .31, and .32 compared to .26. I haven't tried
any other kernels, as these are the only ones I've been able to grab from
apt ;)

With newer kernels I get something on the order of 100MB/s write and read,
with really bursty behaviour. With .26 it's not as fast as it could be,
but at least I get 200-300MB/s, which is reasonable.

Now if your two file systems are on the same LVM VG, that could have an 
impact on performance.

-- 
Thomas Fjellstrom
tfjellstrom@shaw.ca

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-10-31 15:55 unbelievably bad performance: 2.6.27.37 and raid6 Jon Nelson
  2009-10-31 18:43 ` Thomas Fjellstrom
@ 2009-10-31 19:59 ` Christian Pernegger
  2009-11-02 19:39   ` Jon Nelson
  2009-11-01  7:17 ` Kristleifur Daðason
  2009-11-02 14:54 ` Bill Davidsen
  3 siblings, 1 reply; 26+ messages in thread
From: Christian Pernegger @ 2009-10-31 19:59 UTC (permalink / raw)
  To: Jon Nelson; +Cc: LinuxRaid

> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
>      613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]

Why would you use a 4-disk raid6? If 50% of raw capacity is enough,
just go with raid10.
The default 64KiB chunk size has always been very slow for me; try
something larger, perhaps even 1MiB.
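
For example, if you do end up re-creating the array, something along
these lines would give you a 1MiB chunk (partition names taken from your
mdstat; --create is destructive, so back everything up first):

mdadm --create /dev/md0 --metadata=1.1 --level=6 --raid-devices=4 \
      --chunk=1024 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
# --chunk is in KiB, so 1024 = 1MiB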

> While rsyncing a file from an ext3 filesystem to a jfs filesystem,

rsync isn't really a good diagnostic. How's performance with something
simpler, such as dd or bonnie++?
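
For instance, a rough streaming-write test on a filesystem that lives on
the array (the mount point is only a placeholder):

dd if=/dev/zero of=/mnt/array/testfile bs=1M count=2048 conv=fdatasync
echo 3 > /proc/sys/vm/drop_caches   # empty the page cache before reading
dd if=/mnt/array/testfile of=/dev/null bs=1M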

> I looked at stripe_cache_size and increased it to 384

Much too low; I usually set it to 8192. Take care you don't run out of
RAM, though: it's measured in pages per disk, so 16KiB per cached stripe
in your 4-disk case.
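
Something like this, for example (md0 as in your mdstat; 8192 entries
x 4 disks x 4KiB pages works out to 128MiB of RAM):

echo 8192 > /sys/block/md0/md/stripe_cache_size
cat /sys/block/md0/md/stripe_cache_size   # confirm the new value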

> blockdev --getra reports 256 for all involved raid components.

In my experience only the top component of a layered block device
matters at all; here that would be the LV. 256 sectors / 128 KiB seems
awfully low; try something higher, 2MiB at least.
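
E.g. (substitute your own VG/LV name under /dev/mapper; --setra counts
512-byte sectors, so 4096 sectors = 2MiB):

blockdev --getra /dev/mapper/vg-lv     # current readahead
blockdev --setra 4096 /dev/mapper/vg-lv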

> Am I crazy?  Is 12.5MB/s (average) what I should expect, here?

No, it's probably just that the default tuning options are not very good.

Cheers,

C.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-10-31 15:55 unbelievably bad performance: 2.6.27.37 and raid6 Jon Nelson
  2009-10-31 18:43 ` Thomas Fjellstrom
  2009-10-31 19:59 ` Christian Pernegger
@ 2009-11-01  7:17 ` Kristleifur Daðason
  2009-11-02 14:54 ` Bill Davidsen
  3 siblings, 0 replies; 26+ messages in thread
From: Kristleifur Daðason @ 2009-11-01  7:17 UTC (permalink / raw)
  To: linux-raid

On Sat, Oct 31, 2009 at 11:55 PM, Jon Nelson
<jnelson-linux-raid@jamponi.net> wrote:
>
> I have a 4 disk raid6. The disks are individually capable of (at
> least) 75MB/s on average.
> [...]
>
> While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
> am observing speeds in the 10-15MB/s range.
> That seems really really slow.

Hi,
Is the system unresponsive and laggy while you're doing this copy?

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-10-31 18:43 ` Thomas Fjellstrom
@ 2009-11-01 19:37   ` Andrew Dunn
  2009-11-01 19:41     ` Thomas Fjellstrom
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Dunn @ 2009-11-01 19:37 UTC (permalink / raw)
  To: tfjellstrom; +Cc: Jon Nelson, LinuxRaid, pernegger

Are we to expect some resolution in newer kernels?

I am going to rebuild my array (back up the data and re-create it) to
modify the chunk size this week. I hope to get much higher performance
when increasing the chunk size from 64k to 1024k.

Is there a way to modify the chunk size in place, or does the array need
to be re-created?

Thomas Fjellstrom wrote:
> On Sat October 31 2009, Jon Nelson wrote:
>   
>> I have a 4 disk raid6. The disks are individually capable of (at
>> least) 75MB/s on average.
>> The raid6 looks like this:
>>
>> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
>>       613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4]
>>  [UUUU]
>>
>> The raid serves basically as an lvm physical volume.
>>
>> While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
>> am observing speeds in the 10-15MB/s range.
>> That seems really really slow.
>>
>> Using vmstat, I see similar numbers (I'm averaging a bit, I'll see
>> lows of 6MB/s and highs of 18-20MB/s, but these are infrequent.)
>> The system is, for the most part, otherwise unloaded.
>>
>> I looked at stripe_cache_size and increased it to 384 - no difference.
>> blockdev --getra reports 256 for all involved raid components.
>> I'm using the deadline I/O scheduler.
>>
>> Am I crazy?  Is 12.5MB/s (average) what I should expect, here?  What
>> might I look at here?
>>
>>     
>
> I can't say I see numbers that bad.. But I do get 1/3 or less of the 
> performance with .29, .30, .31, and .32 than I get with .26. I haven't tried 
> any other kernels as these are the only ones I've been able to grab from apt 
> ;)
>
> I get something on the order of 100MB/s write and read with newer kernels, 
> with really bursty behaviour, and with .26, its not as fast as it COULD be, 
> but at least I get 200-300MB/s, which is reasonable.
>
> Now if your two file systems are on the same LVM VG, that could have an 
> impact on performance.
>
>   

-- 
Andrew Dunn
http://agdunn.net


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-01 19:37   ` Andrew Dunn
@ 2009-11-01 19:41     ` Thomas Fjellstrom
  2009-11-01 23:43       ` NeilBrown
  0 siblings, 1 reply; 26+ messages in thread
From: Thomas Fjellstrom @ 2009-11-01 19:41 UTC (permalink / raw)
  To: Andrew Dunn; +Cc: Jon Nelson, LinuxRaid, pernegger

On Sun November 1 2009, Andrew Dunn wrote:
> Are we to expect some resolution in newer kernels?

I assume all of the new per-bdi-writeback work going on in .33+ will have a 
large impact. At least I'm hoping.

> I am going to rebuild my array (backup data and re-create) to modify the
> chunk size this week. I hope to get a much higher performance when
> increasing from 64k chunk size to 1024k.
> 
> Is there a way to modify chunk size in place or does the array need to
> be re-created?

This I'm not sure about. I'd like to be able to reshape to a new chunk size 
for testing.

> Thomas Fjellstrom wrote:
> > On Sat October 31 2009, Jon Nelson wrote:
> >> I have a 4 disk raid6. The disks are individually capable of (at
> >> least) 75MB/s on average.
> >> The raid6 looks like this:
> >>
> >> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
> >>       613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4]
> >>  [UUUU]
> >>
> >> The raid serves basically as an lvm physical volume.
> >>
> >> While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
> >> am observing speeds in the 10-15MB/s range.
> >> That seems really really slow.
> >>
> >> Using vmstat, I see similar numbers (I'm averaging a bit, I'll see
> >> lows of 6MB/s and highs of 18-20MB/s, but these are infrequent.)
> >> The system is, for the most part, otherwise unloaded.
> >>
> >> I looked at stripe_cache_size and increased it to 384 - no difference.
> >> blockdev --getra reports 256 for all involved raid components.
> >> I'm using the deadline I/O scheduler.
> >>
> >> Am I crazy?  Is 12.5MB/s (average) what I should expect, here?  What
> >> might I look at here?
> >
> > I can't say I see numbers that bad.. But I do get 1/3 or less of the
> > performance with .29, .30, .31, and .32 than I get with .26. I haven't
> > tried any other kernels as these are the only ones I've been able to
> > grab from apt ;)
> >
> > I get something on the order of 100MB/s write and read with newer
> > kernels, with really bursty behaviour, and with .26, its not as fast as
> > it COULD be, but at least I get 200-300MB/s, which is reasonable.
> >
> > Now if your two file systems are on the same LVM VG, that could have an
> > impact on performance.
> 


-- 
Thomas Fjellstrom
tfjellstrom@shaw.ca

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-01 19:41     ` Thomas Fjellstrom
@ 2009-11-01 23:43       ` NeilBrown
  2009-11-01 23:47         ` Thomas Fjellstrom
  0 siblings, 1 reply; 26+ messages in thread
From: NeilBrown @ 2009-11-01 23:43 UTC (permalink / raw)
  To: tfjellstrom; +Cc: Andrew Dunn, Jon Nelson, LinuxRaid, pernegger

On Mon, November 2, 2009 6:41 am, Thomas Fjellstrom wrote:
> On Sun November 1 2009, Andrew Dunn wrote:
>> Are we to expect some resolution in newer kernels?
>
> I assume all of the new per-bdi-writeback work going on in .33+ will have
> a
> large impact. At least I'm hoping.
>
>> I am going to rebuild my array (backup data and re-create) to modify the
>> chunk size this week. I hope to get a much higher performance when
>> increasing from 64k chunk size to 1024k.
>>
>> Is there a way to modify chunk size in place or does the array need to
>> be re-created?
>
> This I'm not sure about. I'd like to be able to reshape to a new chunk
> size
> for testing.

Reshaping to a new chunksize is possible with the latest mdadm and kernel,
but I would recommend waiting for mdadm-3.1.1 and 2.6.32.
With the current code, a device failure during reshape followed by an
unclean shutdown while the reshape is happening can lead to unrecoverable
data loss.  Even a clean shutdown before the reshape finishes in that case
might be a problem.
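
Once you are on those versions, the invocation would look roughly like
this (md0 is a placeholder; the backup file should live on a device
outside the array):

mdadm --grow /dev/md0 --chunk=256 --backup-file=/root/md0-reshape.bak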

NeilBrown


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-01 23:43       ` NeilBrown
@ 2009-11-01 23:47         ` Thomas Fjellstrom
  2009-11-01 23:53           ` Jon Nelson
                             ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Thomas Fjellstrom @ 2009-11-01 23:47 UTC (permalink / raw)
  To: NeilBrown; +Cc: Andrew Dunn, Jon Nelson, LinuxRaid, pernegger

On Sun November 1 2009, NeilBrown wrote:
> On Mon, November 2, 2009 6:41 am, Thomas Fjellstrom wrote:
> > On Sun November 1 2009, Andrew Dunn wrote:
> >> Are we to expect some resolution in newer kernels?
> >
> > I assume all of the new per-bdi-writeback work going on in .33+ will
> > have a
> > large impact. At least I'm hoping.
> >
> >> I am going to rebuild my array (backup data and re-create) to modify
> >> the chunk size this week. I hope to get a much higher performance when
> >> increasing from 64k chunk size to 1024k.
> >>
> >> Is there a way to modify chunk size in place or does the array need to
> >> be re-created?
> >
> > This I'm not sure about. I'd like to be able to reshape to a new chunk
> > size
> > for testing.
> 
> Reshaping to a new chunksize is possible with the latest mdadm and
>  kernel, but I would recommend waiting for mdadm-3.1.1 and 2.6.32.
> With the current code, a device failure during reshape followed by an
> unclean shutdown while reshape is happening can lead to unrecoverable
> data loss.  Even a clean shutdown before the shape finishes in that case
> might be a problem.

That's good to know, though I'm stuck with 2.6.26 until the performance
regressions in the I/O and scheduling subsystems are solved.

> NeilBrown
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


-- 
Thomas Fjellstrom
tfjellstrom@shaw.ca

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-01 23:47         ` Thomas Fjellstrom
@ 2009-11-01 23:53           ` Jon Nelson
  2009-11-02  2:28             ` Neil Brown
  2009-11-01 23:55           ` Andrew Dunn
  2009-11-04 14:43           ` CoolCold
  2 siblings, 1 reply; 26+ messages in thread
From: Jon Nelson @ 2009-11-01 23:53 UTC (permalink / raw)
  Cc: LinuxRaid

On Sun, Nov 1, 2009 at 5:47 PM, Thomas Fjellstrom <tfjellstrom@shaw.ca> wrote:
> On Sun November 1 2009, NeilBrown wrote:

>> Reshaping to a new chunksize is possible with the latest mdadm and
>>  kernel, but I would recommend waiting for mdadm-3.1.1 and 2.6.32.
>> With the current code, a device failure during reshape followed by an
>> unclean shutdown while reshape is happening can lead to unrecoverable
>> data loss.  Even a clean shutdown before the shape finishes in that case
>> might be a problem.

Do you know if the stable series 2.6.31.XX incorporates the appropriate fixes?

-- 
Jon
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-01 23:47         ` Thomas Fjellstrom
  2009-11-01 23:53           ` Jon Nelson
@ 2009-11-01 23:55           ` Andrew Dunn
  2009-11-04 14:43           ` CoolCold
  2 siblings, 0 replies; 26+ messages in thread
From: Andrew Dunn @ 2009-11-01 23:55 UTC (permalink / raw)
  To: tfjellstrom; +Cc: NeilBrown, Jon Nelson, LinuxRaid, pernegger

Thanks for the update, Neil; good to have something to look forward to.

I am using Ubuntu 9.10; hopefully the new kernel will be incorporated
sometime in the near future. In the meantime I will back everything up
and re-create the array from scratch.

Thomas Fjellstrom wrote:
> On Sun November 1 2009, NeilBrown wrote:
>   
>> On Mon, November 2, 2009 6:41 am, Thomas Fjellstrom wrote:
>>     
>>> On Sun November 1 2009, Andrew Dunn wrote:
>>>       
>>>> Are we to expect some resolution in newer kernels?
>>>>         
>>> I assume all of the new per-bdi-writeback work going on in .33+ will
>>> have a
>>> large impact. At least I'm hoping.
>>>
>>>       
>>>> I am going to rebuild my array (backup data and re-create) to modify
>>>> the chunk size this week. I hope to get a much higher performance when
>>>> increasing from 64k chunk size to 1024k.
>>>>
>>>> Is there a way to modify chunk size in place or does the array need to
>>>> be re-created?
>>>>         
>>> This I'm not sure about. I'd like to be able to reshape to a new chunk
>>> size
>>> for testing.
>>>       
>> Reshaping to a new chunksize is possible with the latest mdadm and
>>  kernel, but I would recommend waiting for mdadm-3.1.1 and 2.6.32.
>> With the current code, a device failure during reshape followed by an
>> unclean shutdown while reshape is happening can lead to unrecoverable
>> data loss.  Even a clean shutdown before the shape finishes in that case
>> might be a problem.
>>     
>
> That's good to know. Though I'm stuck with 2.6.26 till the performance 
> regressions in the io and scheduling subsystems are solved.
>
>   
>> NeilBrown
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>>     
>
>
>   

-- 
Andrew Dunn
http://agdunn.net


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-01 23:53           ` Jon Nelson
@ 2009-11-02  2:28             ` Neil Brown
  0 siblings, 0 replies; 26+ messages in thread
From: Neil Brown @ 2009-11-02  2:28 UTC (permalink / raw)
  To: Jon Nelson; +Cc: LinuxRaid

On Sunday November 1, jnelson-linux-raid@jamponi.net wrote:
> On Sun, Nov 1, 2009 at 5:47 PM, Thomas Fjellstrom <tfjellstrom@shaw.ca> wrote:
> > On Sun November 1 2009, NeilBrown wrote:
> 
> >> Reshaping to a new chunksize is possible with the latest mdadm and
> >>  kernel, but I would recommend waiting for mdadm-3.1.1 and 2.6.32.
> >> With the current code, a device failure during reshape followed by an
> >> unclean shutdown while reshape is happening can lead to unrecoverable
> >> data loss.  Even a clean shutdown before the shape finishes in that case
> >> might be a problem.
> 
> Do you know if the stable series 2.6.31.XX incorporates the appropriate fixes?

They haven't been written yet...
The kernel change is very small, I think, so it will go into -stable.
But I want to write the mdadm changes first (which are bigger) and be
sure I don't need any other kernel changes.

NeilBrown
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-10-31 15:55 unbelievably bad performance: 2.6.27.37 and raid6 Jon Nelson
                   ` (2 preceding siblings ...)
  2009-11-01  7:17 ` Kristleifur Daðason
@ 2009-11-02 14:54 ` Bill Davidsen
  2009-11-02 15:03   ` Jon Nelson
  2009-11-02 18:51   ` Christian Pernegger
  3 siblings, 2 replies; 26+ messages in thread
From: Bill Davidsen @ 2009-11-02 14:54 UTC (permalink / raw)
  To: Jon Nelson; +Cc: LinuxRaid

Jon Nelson wrote:
> I have a 4 disk raid6. The disks are individually capable of (at
> least) 75MB/s on average.
> The raid6 looks like this:
>
> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
>       613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
>
> The raid serves basically as an lvm physical volume.
>
> While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
> am observing speeds in the 10-15MB/s range.
> That seems really really slow.
>
>   
It is really slow. Recent kernels seem to be unsuitable for use as large 
file servers, as the performance is, as you described it, "unbelievably 
bad."
> Using vmstat, I see similar numbers (I'm averaging a bit, I'll see
> lows of 6MB/s and highs of 18-20MB/s, but these are infrequent.)
> The system is, for the most part, otherwise unloaded.
>
> I looked at stripe_cache_size and increased it to 384 - no difference.
> blockdev --getra reports 256 for all involved raid components.
> I'm using the deadline I/O scheduler.
>
>   
Push it to 8192 or so (assuming enough memory), but expect pretty much 
minimal improvement. I don't know what problems were solved in recent 
kernels, but they are simply not worth the threefold drop in performance. 
Linux has been getting slower over time as features were added, and faster 
hardware has overcome the issues, but this one needs an SSD to make the 
server useful, and I can't afford it.

> Am I crazy?  Is 12.5MB/s (average) what I should expect, here?  What
> might I look at here?
>
>   


-- 
Bill Davidsen <davidsen@tmr.com>
  Unintended results are the well-earned reward for incompetence.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-02 14:54 ` Bill Davidsen
@ 2009-11-02 15:03   ` Jon Nelson
  2009-11-03  5:36     ` NeilBrown
  2009-11-02 18:51   ` Christian Pernegger
  1 sibling, 1 reply; 26+ messages in thread
From: Jon Nelson @ 2009-11-02 15:03 UTC (permalink / raw)
  Cc: LinuxRaid

On Mon, Nov 2, 2009 at 8:54 AM, Bill Davidsen <davidsen@tmr.com> wrote:
> Jon Nelson wrote:
>>
>> I have a 4 disk raid6. The disks are individually capable of (at
>> least) 75MB/s on average.
>> The raid6 looks like this:
>>
>> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
>>      613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4]
>> [UUUU]
>>
>> The raid serves basically as an lvm physical volume.
>>
>> While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
>> am observing speeds in the 10-15MB/s range.
>> That seems really really slow.
>>
>>
>
> It is really slow, recent kernels seem to be unsuitable for use as large
> file servers, as the performance is, as you described it, "unbelievably
> bad."

Yeah. I'm hoping that the 2.6.31.XX stable kernel series gets some of
these improvements; the .27 series has not been the most stable for me
either. 2.6.27.25 was the last rock-solid release of the .27 series for me.

-- 
Jon
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-02 14:54 ` Bill Davidsen
  2009-11-02 15:03   ` Jon Nelson
@ 2009-11-02 18:51   ` Christian Pernegger
  1 sibling, 0 replies; 26+ messages in thread
From: Christian Pernegger @ 2009-11-02 18:51 UTC (permalink / raw)
  To: Linux RAID

> It is really slow, recent kernels seem to be unsuitable for use as large
> file servers, as the performance is, as you described it, "unbelievably
> bad."

Is this raid6-specific? FWIW, md performance has been fine (not
great, but fine) on my little raid5/10 file servers. As soon as I add
dm-crypt to the mix it all goes to hell, though.

Maybe it's a device-mapper thing? Is everybody affected using LVM?

C.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-10-31 19:59 ` Christian Pernegger
@ 2009-11-02 19:39   ` Jon Nelson
  2009-11-02 20:01     ` Christian Pernegger
  0 siblings, 1 reply; 26+ messages in thread
From: Jon Nelson @ 2009-11-02 19:39 UTC (permalink / raw)
  Cc: LinuxRaid

On Sat, Oct 31, 2009 at 1:59 PM, Christian Pernegger
<pernegger@gmail.com> wrote:
>> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
>>      613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
>
> Why would you use a 4 disk raid6? If 50% of raw capacity is enough
> just go with raid10

With 4 disks, the ability to survive *any two* devices going bad is a big bonus.
Using raid10 with two copies (1 original, 1 duplicate) on 4 disks
gives me 50% of the space, but I'm only guaranteed to survive *1* failed
device. I'm guessing I'd have to go with raid10 with three copies (1
original, 2 duplicates), which is even worse (2/3 of the space lost). Did
I just calculate that all wrong?


-- 
Jon
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-02 19:39   ` Jon Nelson
@ 2009-11-02 20:01     ` Christian Pernegger
  0 siblings, 0 replies; 26+ messages in thread
From: Christian Pernegger @ 2009-11-02 20:01 UTC (permalink / raw)
  To: Jon Nelson; +Cc: LinuxRaid

> With 4 disks, the ability to sustain *any two* devices going bad is a big bonus.
> Using raid10 with two copies (1 original, 1 duplicate) on 4 disks
> gives me 50% space but I can only sustain *1* failed device. I'm
> guessing I'd have to go with raid10 with three copies (1 original, 2
> duplicate) which is even worse (2/3 space lost). Did I just calculate
> that all wrong?

No, that's fine; I hadn't thought of the fact that raid6 survives
all 6 possible two-disk failures while raid10 only survives 4 of them.
Still, raid6 with 4 disks seems a bit pathological / corner-casey to
me, and aside from that the md raid6 implementation is quite new. And
you'll need backups either way. Personally, I'd feel safer with
raid10. It's certainly faster.
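
For reference, a 4-disk raid10 with the default near-2 layout would be
created roughly like this (reusing the partition names from your mdstat):

mdadm --create /dev/md0 --level=10 --raid-devices=4 --layout=n2 \
      /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4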

Cheers,

C.

P.S.: About the only reason I can see to go with 4 disk raid6 is a
planned capacity expansion in the near future.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-02 15:03   ` Jon Nelson
@ 2009-11-03  5:36     ` NeilBrown
  2009-11-03  6:09       ` Michael Evans
  0 siblings, 1 reply; 26+ messages in thread
From: NeilBrown @ 2009-11-03  5:36 UTC (permalink / raw)
  To: Jon Nelson; +Cc: LinuxRaid

On Tue, November 3, 2009 2:03 am, Jon Nelson wrote:
> On Mon, Nov 2, 2009 at 8:54 AM, Bill Davidsen <davidsen@tmr.com> wrote:
>> Jon Nelson wrote:
>>>
>>> I have a 4 disk raid6. The disks are individually capable of (at
>>> least) 75MB/s on average.
>>> The raid6 looks like this:
>>>
>>> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
>>>      613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4]
>>> [UUUU]
>>>
>>> The raid serves basically as an lvm physical volume.
>>>
>>> While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
>>> am observing speeds in the 10-15MB/s range.
>>> That seems really really slow.
>>>
>>>
>>
>> It is really slow, recent kernels seem to be unsuitable for use as large
>> file servers, as the performance is, as you described it, "unbelievably
>> bad."
>
> Yeah. I'm hoping that the 2.6.31.XX stable kernel series gets some of
> these improvements, the .27 series has been not the most stable for me
> either.  2.6.27.25 was the last rock-solid of the .27 series for me.

I wouldn't get your hopes up...
I did some limited testing of simple writes to ext2, and the current
32-pre kernel is noticeably slower than .26, .27, .28, .29... (that is as
far as I got with testing; I should write a script and leave it running
overnight to get a broader picture).

NeilBrown

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-03  5:36     ` NeilBrown
@ 2009-11-03  6:09       ` Michael Evans
  2009-11-03  6:28         ` NeilBrown
  0 siblings, 1 reply; 26+ messages in thread
From: Michael Evans @ 2009-11-03  6:09 UTC (permalink / raw)
  To: NeilBrown; +Cc: Jon Nelson, LinuxRaid

On Mon, Nov 2, 2009 at 9:36 PM, NeilBrown <neilb@suse.de> wrote:
> On Tue, November 3, 2009 2:03 am, Jon Nelson wrote:
>> On Mon, Nov 2, 2009 at 8:54 AM, Bill Davidsen <davidsen@tmr.com> wrote:
>>> Jon Nelson wrote:
>>>>
>>>> I have a 4 disk raid6. The disks are individually capable of (at
>>>> least) 75MB/s on average.
>>>> The raid6 looks like this:
>>>>
>>>> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
>>>>      613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4]
>>>> [UUUU]
>>>>
>>>> The raid serves basically as an lvm physical volume.
>>>>
>>>> While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
>>>> am observing speeds in the 10-15MB/s range.
>>>> That seems really really slow.
>>>>
>>>>
>>>
>>> It is really slow, recent kernels seem to be unsuitable for use as large
>>> file servers, as the performance is, as you described it, "unbelievably
>>> bad."
>>
>> Yeah. I'm hoping that the 2.6.31.XX stable kernel series gets some of
>> these improvements, the .27 series has been not the most stable for me
>> either.  2.6.27.25 was the last rock-solid of the .27 series for me.
>
> I wouldn't get your hopes up...
> I did some limited testing of simple writes to ext2 and the current
> 32-pre kernel is noticably slower than .26 .27 .28 .29 .. (that is as
> far as I got with testing... I should write a script and leave it running
> overnight to get a broader picture).
>
> NeilBrown
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

Maybe my speed results even after the 'fix' are also down to this issue: I
expect each of my drives is capable of at least 8MB/sec sustained
(a highly pessimistic estimate).

     2909829120 blocks super 1.1 level 6, 128k chunk, algorithm 18
[8/8] [UUUUUUUU]
     [==>..................]  reshape = 10.4% (50708096/484971520)
finish=3989.0min speed=1813K/sec

The 'backup file' is on a separate raid1 device and is approximately 25MB
in size.  My CPU has virtually no load and I've got gigabytes of memory
free.

(also sorry for duplicates, I hit reply at the top instead of reply to
all at the bottom out of habit)
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-03  6:09       ` Michael Evans
@ 2009-11-03  6:28         ` NeilBrown
  2009-11-03  6:39           ` Michael Evans
                             ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: NeilBrown @ 2009-11-03  6:28 UTC (permalink / raw)
  To: Michael Evans; +Cc: Jon Nelson, LinuxRaid

On Tue, November 3, 2009 5:09 pm, Michael Evans wrote:
> On Mon, Nov 2, 2009 at 9:36 PM, NeilBrown <neilb@suse.de> wrote:
>> On Tue, November 3, 2009 2:03 am, Jon Nelson wrote:
>>> On Mon, Nov 2, 2009 at 8:54 AM, Bill Davidsen <davidsen@tmr.com> wrote:
>>>> Jon Nelson wrote:
>>>>>
>>>>> I have a 4 disk raid6. The disks are individually capable of (at
>>>>> least) 75MB/s on average.
>>>>> The raid6 looks like this:
>>>>>
>>>>> md0 : active raid6 sda4[0] sdc4[5] sdd4[4] sdb4[6]
>>>>>      613409536 blocks super 1.1 level 6, 64k chunk, algorithm 2 [4/4]
>>>>> [UUUU]
>>>>>
>>>>> The raid serves basically as an lvm physical volume.
>>>>>
>>>>> While rsyncing a file from an ext3 filesystem to a jfs filesystem, I
>>>>> am observing speeds in the 10-15MB/s range.
>>>>> That seems really really slow.
>>>>>
>>>>>
>>>>
>>>> It is really slow, recent kernels seem to be unsuitable for use as
>>>> large
>>>> file servers, as the performance is, as you described it,
>>>> "unbelievably
>>>> bad."
>>>
>>> Yeah. I'm hoping that the 2.6.31.XX stable kernel series gets some of
>>> these improvements, the .27 series has been not the most stable for me
>>> either.  2.6.27.25 was the last rock-solid of the .27 series for me.
>>
>> I wouldn't get your hopes up...
>> I did some limited testing of simple writes to ext2 and the current
>> 32-pre kernel is noticably slower than .26 .27 .28 .29 .. (that is as
>> far as I got with testing... I should write a script and leave it
>> running
>> overnight to get a broader picture).
>>
>> NeilBrown
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>
> Maybe my speed results even after the 'fix' are also this issue:  I
> expect each of my drives is capable of at least 8MB/sec sustained
> (highly pessimistic).
>
>      2909829120 blocks super 1.1 level 6, 128k chunk, algorithm 18
> [8/8] [UUUUUUUU]
>      [==>..................]  reshape = 10.4% (50708096/484971520)
> finish=3989.0min speed=1813K/sec
>
> The 'backup file' is on a separate raid 1 device and approximately 25
> mb in size.  My cpu has virtually no load and I've got gigs of memory
> free.

A reshape is a fundamentally slow operation.  Each block needs to
be read and then written somewhere else so there is little opportunity
for streaming.
An in-place reshape (i.e the array doesn't get bigger or smaller) is
even slower as we have to take a backup copy of each range of blocks
before writing them back out.  This limits streaming even more.

It is possible to make it faster than it currently is by increasing the
array's stripe_cache_size and also increasing the 'backup' size
that mdadm uses.  mdadm-3.1.1 will try to do better in this respect.
However it will still be significantly slower than e.g. a resync.
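
Concretely, something along these lines can help (md0 is a placeholder;
the speed_limit knobs apply to reshape as well as resync):

echo 8192 > /sys/block/md0/md/stripe_cache_size
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # minimum rate md aims for
cat /proc/mdstat                                  # watch the reshape speed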

So reshape will always be slow.  It is a completely different issue
to filesystem activity on a RAID array being slow.  Recent reports of
slowness are, I think, not directly related to md/raid.  It is either
the filesystem or the VM or a combination of the two that causes
these slowdowns.


NeilBrown
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-03  6:28         ` NeilBrown
@ 2009-11-03  6:39           ` Michael Evans
  2009-11-03  6:46           ` Michael Evans
  2009-11-03 13:07           ` Goswin von Brederlow
  2 siblings, 0 replies; 26+ messages in thread
From: Michael Evans @ 2009-11-03  6:39 UTC (permalink / raw)
  To: NeilBrown; +Cc: Jon Nelson, LinuxRaid

>
> A reshape is a fundamentally slow operation.  Each block needs to
> be read and then written somewhere else so there is little opportunity
> for streaming.
> An in-place reshape (i.e the array doesn't get bigger or smaller) is
> even slower as we have to take a backup copy of each range of blocks
> before writing them back out.  This limits streaming even more.
>
> It is possible to get it fast than it is by increasing the
> array's stripe_cache_size and also increasing the 'backup' size
> that mdadm uses.  mdadm-3.1.1 will try to do better in this respect.
> However it will still be significantly slower than e.g. a resync.
>
> So reshape will always be slow.  It is a completely different issue
> to filesystem activity on a RAID array being slow.  Recent reports of
> slowness are, I think, not directly related to md/raid.  It is either
> the filesystem or the VM or a combination of the two that causes
> these slowdowns.
>
>
> NeilBrown
>

stripe_cache_active = default 0?
stripe_cache_size = default 256? (1MB per disk)
cat /sys/block/md52/md/stripe_cache_*
0
8192

I bumped it up, but I haven't seen a real increase in speed yet.
Do I need to mdadm -S /dev/md52 and then re-assemble it for this to
take effect, or is it the fact that stripe_cache_active seems to
be 'deactivated'?  Can I safely set it to 1 mid-reshape?
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-03  6:28         ` NeilBrown
  2009-11-03  6:39           ` Michael Evans
@ 2009-11-03  6:46           ` Michael Evans
  2009-11-03  9:16             ` NeilBrown
  2009-11-03 13:07           ` Goswin von Brederlow
  2 siblings, 1 reply; 26+ messages in thread
From: Michael Evans @ 2009-11-03  6:46 UTC (permalink / raw)
  To: NeilBrown; +Cc: Jon Nelson, LinuxRaid

> It is possible to get it fast than it is by increasing the
> array's stripe_cache_size and also increasing the 'backup' size
> that mdadm uses.  mdadm-3.1.1 will try to do better in this respect.
> However it will still be significantly slower than e.g. a resync.
>
> NeilBrown
>

Sorry, one more question: I was able to find stripe_cache_size, and already
asked about stripe_cache_active (and whether setting it to 1 was safe), but
I can't seem to find the 'backup' size variable mentioned
anywhere.  I'm going to start looking at the code next, but is it
supposed to be in the documentation?
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-03  6:46           ` Michael Evans
@ 2009-11-03  9:16             ` NeilBrown
  0 siblings, 0 replies; 26+ messages in thread
From: NeilBrown @ 2009-11-03  9:16 UTC (permalink / raw)
  To: Michael Evans; +Cc: Jon Nelson, LinuxRaid

On Tue, November 3, 2009 5:46 pm, Michael Evans wrote:
>> It is possible to get it fast than it is by increasing the
>> array's stripe_cache_size and also increasing the 'backup' size
>> that mdadm uses.  mdadm-3.1.1 will try to do better in this respect.
>> However it will still be significantly slower than e.g. a resync.
>>
>> NeilBrown
>>
>
> Sorry, one more Q: I was able to find strip_cache_size, and already
> asked about stripe_cache_active (and if setting it to 1 was safe); but
> I can't seem to even find the 'backup' size variable mentioned
> anywhere.  I'm going to start looking at the code next, but is it
> supposed to be in documentation?

It is in the mdadm code, in 'Grow.c'.
It is called 'blocks', I think.  I'm not sure offhand what the units are.

NeilBrown

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-03  6:28         ` NeilBrown
  2009-11-03  6:39           ` Michael Evans
  2009-11-03  6:46           ` Michael Evans
@ 2009-11-03 13:07           ` Goswin von Brederlow
  2009-11-03 16:28             ` Michael Evans
  2 siblings, 1 reply; 26+ messages in thread
From: Goswin von Brederlow @ 2009-11-03 13:07 UTC (permalink / raw)
  To: NeilBrown; +Cc: Michael Evans, Jon Nelson, LinuxRaid

"NeilBrown" <neilb@suse.de> writes:

> A reshape is a fundamentally slow operation.  Each block needs to
> be read and then written somewhere else so there is little opportunity
> for streaming.
> An in-place reshape (i.e the array doesn't get bigger or smaller) is
> even slower as we have to take a backup copy of each range of blocks
> before writing them back out.  This limits streaming even more.
>
> It is possible to get it fast than it is by increasing the
> array's stripe_cache_size and also increasing the 'backup' size
> that mdadm uses.  mdadm-3.1.1 will try to do better in this respect.
> However it will still be significantly slower than e.g. a resync.
>
> So reshape will always be slow.  It is a completely different issue
> to filesystem activity on a RAID array being slow.  Recent reports of
> slowness are, I think, not directly related to md/raid.  It is either
> the filesystem or the VM or a combination of the two that causes
> these slowdowns.
>
>
> NeilBrown

Now why is that? Let's leave out the case of an in-place
reshape; nothing can be done there to avoid making a backup of blocks,
which severely limits the speed.

But the most common case should be growing an array. Let's look at the
first few steps of a 3->4 disk raid5 reshape. Each step denotes a point
where a sync is required:

Step 0         Step 1         Step 2         Step 3         Step 4
 A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D
00 01  p  x    00 01 02  p    00 01 02  p    00 01 02  p    00 01 02  p
02  p 03  x    02  p 03  x    03 04  p 05    03 04  p 05    03 04  p 05
 p 04 05  x     p 04 05  x     x  x  x  x    06  p 07 08    06  p 07 08
06 07  p  x    06 07  p  x    06 07  p  x     x  x  x  x     p 09 10 11
08  p 09  x    08  p 09  x    08  p 09  x    08  p 09  x     x  x  x  x
 p 10 11  x     p 10 11  x     p 10 11  x     p 10 11  x     x  x  x  x
12 13  p  x    12 13  p  x    12 13  p  x    12 13  p  x    12 13  p  x
14  p 15  x    14  p 15  x    14  p 15  x    14  p 15  x    14  p 15  x
 p 16 17  x     p 16 17  x     p 16 17  x     p 16 17  x     p 16 17  x
18 19  p  x    18 19  p  x    18 19  p  x    18 19  p  x    18 19  p  x
20  p 21  x    20  p 21  x    20  p 21  x    20  p 21  x    20  p 21  x
 p 22 23  x     p 22 23  x     p 22 23  x     p 22 23  x     p 22 23  x
24 25  p  x    24 25  p  x    24 25  p  x    24 25  p  x    24 25  p  x
26  p 27  x    26  p 27  x    26  p 27  x    26  p 27  x    26  p 27  x
 p 28 29  x     p 28 29  x     p 28 29  x     p 28 29  x     p 28 29  x

Step 5         Step 6         Step 7         Step 8         Step 9
 A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D
00 01 02  p    00 01 02  p    00 01 02  p    00 01 02  p    00 01 02  p
03 04  p 05    03 04  p 05    03 04  p 05    03 04  p 05    03 04  p 05
06  p 07 08    06  p 07 08    06  p 07 08    06  p 07 08    06  p 07 08
 p 09 10 11     p 09 10 11     p 09 10 11     p 09 10 11     p 09 10 11
 x  x  x  x    12 13 14  p    12 13 14  p    12 13 14  p    12 13 14  p
 x  x  x  x    15 16  p 17    15 16  p 17    15 16  p 17    15 16  p 17
12 13  p  x     x  x  x  x    18  p 19 20    18  p 19 20    18  p 19 20
14  p 15  x     x  x  x  x     p 21 22 23     p 21 22 23     p 21 22 23
 p 16 17  x     x  x  x  x    24 25 26  p    24 25 26  p    24 25 26  p
18 19  p  x    18 19  p  x     x  x  x  x    27 28  p 29    27 28  p 29
20  p 21  x    20  p 21  x     x  x  x  x    30  p 31 32    30  p 31 32
 p 22 23  x     p 22 23  x     x  x  x  x     p 33 34 35     p 33 34 35
24 25  p  x    24 25  p  x     x  x  x  x    36 37 38  p    36 37 38  p
26  p 27  x    26  p 27  x    26  p 27  x     x  x  x  x    39 40  p 41
 p 28 29  x     p 28 29  x     p 28 29  x     x  x  x  x    42  p 43 44


In Step 0 and Step 1 the source and destination stripes overlap, so a
backup is required. But at Step 2 you have a full stripe to work with
safely; at Step 4 two stripes are safe, at Step 6 three stripes, and at
Step 7 four stripes. As you go, the safe region gets larger and larger,
requiring fewer and fewer sync points.

Ideally the raid reshape should read as much data from the source
stripes as possible in one go and then write it all out in one
go, then rinse and repeat. For a simple implementation, why not do
this:

1) read reshape-sync-size from proc/sys, default to 10% ram size
2) sync-size = min(reshape-sync-size, size of safe region)
3) setup internal mirror between old (read-write) and new stripes (write only)
4) read source blocks into stripe cache
5) compute new parity
6) put stripe into write cache
7) goto 3 until sync-size is reached
8) sync blocks to disk
9) record progress and remove internal mirror
10) goto 1

Optionally, in step 9 you can skip recording the progress if the safe
region is big enough for another read/write pass.

The important idea behind this would be that, given enough free RAM,
large linear reads and large linear writes alternate. Also, since the
normal cache is used instead of the static stripe cache, writes will be
flushed out prematurely if there is not enough RAM. This will degrade
performance, but that is better than running out of memory.

I have 4GB on my desktop, with at least 3GB free if I'm not doing
anything expensive. A raid reshape should be able to alternate 3GB of
linear reads and writes. But I would already be happy if it did 256MB.
There is lots of opportunity for streaming; it might just be hard to get
the kernel I/O system to cooperate.

MfG
        Goswin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-03 13:07           ` Goswin von Brederlow
@ 2009-11-03 16:28             ` Michael Evans
  2009-11-03 19:26               ` Goswin von Brederlow
  0 siblings, 1 reply; 26+ messages in thread
From: Michael Evans @ 2009-11-03 16:28 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: NeilBrown, Jon Nelson, LinuxRaid

On Tue, Nov 3, 2009 at 5:07 AM, Goswin von Brederlow <goswin-v-b@web.de> wrote:
> "NeilBrown" <neilb@suse.de> writes:
>
>> A reshape is a fundamentally slow operation.  Each block needs to
>> be read and then written somewhere else so there is little opportunity
>> for streaming.
>> An in-place reshape (i.e the array doesn't get bigger or smaller) is
>> even slower as we have to take a backup copy of each range of blocks
>> before writing them back out.  This limits streaming even more.
>>
>> It is possible to get it fast than it is by increasing the
>> array's stripe_cache_size and also increasing the 'backup' size
>> that mdadm uses.  mdadm-3.1.1 will try to do better in this respect.
>> However it will still be significantly slower than e.g. a resync.
>>
>> So reshape will always be slow.  It is a completely different issue
>> to filesystem activity on a RAID array being slow.  Recent reports of
>> slowness are, I think, not directly related to md/raid.  It is either
>> the filesystem or the VM or a combination of the two that causes
>> these slowdowns.
>>
>>
>> NeilBrown
>
> Now why is that? Lets leave out the case of an in-place
> reshape. Nothing can be done to avoid making a backup of blocks
> there, which severly limits the speed.
>
> But the most common case should be growing an array. Lets look at the
> first few steps or 3->4 disk raid5 reshape. Each step denotes a point
> where a sync is required:
>
> Step 0         Step 1         Step 2         Step 3         Step 4
>  A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D
> 00 01  p  x    00 01 02  p    00 01 02  p    00 01 02  p    00 01 02  p
> 02  p 03  x    02  p 03  x    03 04  p 05    03 04  p 05    03 04  p 05
>  p 04 05  x     p 04 05  x     x  x  x  x    06  p 07 08    06  p 07 08
> 06 07  p  x    06 07  p  x    06 07  p  x     x  x  x  x     p 09 10 11
> 08  p 09  x    08  p 09  x    08  p 09  x    08  p 09  x     x  x  x  x
>  p 10 11  x     p 10 11  x     p 10 11  x     p 10 11  x     x  x  x  x
> 12 13  p  x    12 13  p  x    12 13  p  x    12 13  p  x    12 13  p  x
> 14  p 15  x    14  p 15  x    14  p 15  x    14  p 15  x    14  p 15  x
>  p 16 17  x     p 16 17  x     p 16 17  x     p 16 17  x     p 16 17  x
> 18 19  p  x    18 19  p  x    18 19  p  x    18 19  p  x    18 19  p  x
> 20  p 21  x    20  p 21  x    20  p 21  x    20  p 21  x    20  p 21  x
>  p 22 23  x     p 22 23  x     p 22 23  x     p 22 23  x     p 22 23  x
> 24 25  p  x    24 25  p  x    24 25  p  x    24 25  p  x    24 25  p  x
> 26  p 27  x    26  p 27  x    26  p 27  x    26  p 27  x    26  p 27  x
>  p 28 29  x     p 28 29  x     p 28 29  x     p 28 29  x     p 28 29  x
>
> Step 5         Step 6         Step 7         Step 8         Step 9
>  A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D
> 00 01 02  p    00 01 02  p    00 01 02  p    00 01 02  p    00 01 02  p
> 03 04  p 05    03 04  p 05    03 04  p 05    03 04  p 05    03 04  p 05
> 06  p 07 08    06  p 07 08    06  p 07 08    06  p 07 08    06  p 07 08
>  p 09 10 11     p 09 10 11     p 09 10 11     p 09 10 11     p 09 10 11
>  x  x  x  x    12 13 14  p    12 13 14  p    12 13 14  p    12 13 14  p
>  x  x  x  x    15 16  p 17    15 16  p 17    15 16  p 17    15 16  p 17
> 12 13  p  x     x  x  x  x    18  p 19 20    18  p 19 20    18  p 19 20
> 14  p 15  x     x  x  x  x     p 21 22 23     p 21 22 23     p 21 22 23
>  p 16 17  x     x  x  x  x    24 25 26  p    24 25 26  p    24 25 26  p
> 18 19  p  x    18 19  p  x     x  x  x  x    27 28  p 29    27 28  p 29
> 20  p 21  x    20  p 21  x     x  x  x  x    30  p 31 32    30  p 31 32
>  p 22 23  x     p 22 23  x     x  x  x  x     p 33 34 35     p 33 34 35
> 24 25  p  x    24 25  p  x     x  x  x  x    36 37 38  p    36 37 38  p
> 26  p 27  x    26  p 27  x    26  p 27  x     x  x  x  x    39 40  p 41
>  p 28 29  x     p 28 29  x     p 28 29  x     x  x  x  x    42  p 43 44
>
>
> In Step 0 and Step 1 the source and destination stripes overlap so a
> backup is required. But at Step 2 you have a full stripe to work with
> safely, at Step 4 2 stripes are save, Step 6 3 stripes and Step 7 4
> stripes. As you go the safe region gets larger and larger requiring
> less and less sync points.
>
> Idealy the raid reshape should read as much data from the source
> stripes as possible in one go and then write it all out in one
> go. Then rince and repeat. For a simple implementation why not do
> this:
>
> 1) read reshape-sync-size from proc/sys, default to 10% ram size
> 2) sync-size = min(reshape-sync-size, size of safe region)
> 3) setup internal mirror between old (read-write) and new stripes (write only)
> 4) read source blocks into stripe cache
> 5) compute new parity
> 6) put stripe into write cache
> 7) goto 3 until sync-size is reached
> 8) sync blocks to disk
> 9) record progress and remove internal mirror
> 10) goto 1
>
> Optionally in 9 you can skip recording the progress if the safe region
> is big enough for another read/write pass.
>
> The important idea behind this would be that, given enough free ram,
> there is a large linear read and large linear write alternating. Also,
> since the normal cache is used instead of the static stripe cache, if
> there is not enough ram then writes will be flushed out prematurely.
> This will lead to a degradation of performance but that is better than
> running out of memory.
>
> I have 4GB on my desktop with at least 3GB free if I'm not doing
> anything expensive. A raid-reshape should be able to do 3GB linear
> read and write alternatively. But I would already be happy if it would
> do 256MB. There is lots of opportunity for streaming. It might justbe
> hard to get the kernel IO system to cooperate.
>
> MfG
>        Goswin
>

Skimming your message, I agree with the major points; however, you're
only considering the best-case scenario (which is how it probably
should run for performance).  There is also the worst-case scenario
where a device, driver, OS, or even the power supply, let's say, fails
mid-operation.

If the reshape doesn't create a gap (obviously the gap would continue to
grow as the reshape proceeds), then it's still an 'in-place' operation
(which I argue should be done in the largest block that fits in memory,
but with the data backed up on a device).

Growing operations obviously have free space on the new device, and
further as the operation proceeds there will be a growing gap between
the re-written data and the old copy of the data.

Shrinking operations, counter-intuitively, also have a growing area of
free space, at the end of the device.  Working backwards, after a
given number of stripes, the operation should be just as safe as a
normal grow, only in reverse.

In any of the three cases, the largest possible write window per
device should be used to take advantage of the usual gains in speed.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-03 16:28             ` Michael Evans
@ 2009-11-03 19:26               ` Goswin von Brederlow
  0 siblings, 0 replies; 26+ messages in thread
From: Goswin von Brederlow @ 2009-11-03 19:26 UTC (permalink / raw)
  To: Michael Evans; +Cc: Goswin von Brederlow, NeilBrown, Jon Nelson, LinuxRaid

Michael Evans <mjevans1983@gmail.com> writes:

> On Tue, Nov 3, 2009 at 5:07 AM, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>> "NeilBrown" <neilb@suse.de> writes:
>>
>>> A reshape is a fundamentally slow operation.  Each block needs to
>>> be read and then written somewhere else so there is little opportunity
>>> for streaming.
>>> An in-place reshape (i.e the array doesn't get bigger or smaller) is
>>> even slower as we have to take a backup copy of each range of blocks
>>> before writing them back out.  This limits streaming even more.
>>>
>>> It is possible to get it fast than it is by increasing the
>>> array's stripe_cache_size and also increasing the 'backup' size
>>> that mdadm uses.  mdadm-3.1.1 will try to do better in this respect.
>>> However it will still be significantly slower than e.g. a resync.
>>>
>>> So reshape will always be slow.  It is a completely different issue
>>> to filesystem activity on a RAID array being slow.  Recent reports of
>>> slowness are, I think, not directly related to md/raid.  It is either
>>> the filesystem or the VM or a combination of the two that causes
>>> these slowdowns.
>>>
>>>
>>> NeilBrown
>>
>> Now why is that? Lets leave out the case of an in-place
>> reshape. Nothing can be done to avoid making a backup of blocks
>> there, which severly limits the speed.
>>
>> But the most common case should be growing an array. Lets look at the
>> first few steps or 3->4 disk raid5 reshape. Each step denotes a point
>> where a sync is required:
>>
>> Step 0         Step 1         Step 2         Step 3         Step 4
>>  A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D
>> 00 01  p  x    00 01 02  p    00 01 02  p    00 01 02  p    00 01 02  p
>> 02  p 03  x    02  p 03  x    03 04  p 05    03 04  p 05    03 04  p 05
>>  p 04 05  x     p 04 05  x     x  x  x  x    06  p 07 08    06  p 07 08
>> 06 07  p  x    06 07  p  x    06 07  p  x     x  x  x  x     p 09 10 11
>> 08  p 09  x    08  p 09  x    08  p 09  x    08  p 09  x     x  x  x  x
>>  p 10 11  x     p 10 11  x     p 10 11  x     p 10 11  x     x  x  x  x
>> 12 13  p  x    12 13  p  x    12 13  p  x    12 13  p  x    12 13  p  x
>> 14  p 15  x    14  p 15  x    14  p 15  x    14  p 15  x    14  p 15  x
>>  p 16 17  x     p 16 17  x     p 16 17  x     p 16 17  x     p 16 17  x
>> 18 19  p  x    18 19  p  x    18 19  p  x    18 19  p  x    18 19  p  x
>> 20  p 21  x    20  p 21  x    20  p 21  x    20  p 21  x    20  p 21  x
>>  p 22 23  x     p 22 23  x     p 22 23  x     p 22 23  x     p 22 23  x
>> 24 25  p  x    24 25  p  x    24 25  p  x    24 25  p  x    24 25  p  x
>> 26  p 27  x    26  p 27  x    26  p 27  x    26  p 27  x    26  p 27  x
>>  p 28 29  x     p 28 29  x     p 28 29  x     p 28 29  x     p 28 29  x
>>
>> Step 5         Step 6         Step 7         Step 8         Step 9
>>  A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D     A  B  C  D
>> 00 01 02  p    00 01 02  p    00 01 02  p    00 01 02  p    00 01 02  p
>> 03 04  p 05    03 04  p 05    03 04  p 05    03 04  p 05    03 04  p 05
>> 06  p 07 08    06  p 07 08    06  p 07 08    06  p 07 08    06  p 07 08
>>  p 09 10 11     p 09 10 11     p 09 10 11     p 09 10 11     p 09 10 11
>>  x  x  x  x    12 13 14  p    12 13 14  p    12 13 14  p    12 13 14  p
>>  x  x  x  x    15 16  p 17    15 16  p 17    15 16  p 17    15 16  p 17
>> 12 13  p  x     x  x  x  x    18  p 19 20    18  p 19 20    18  p 19 20
>> 14  p 15  x     x  x  x  x     p 21 22 23     p 21 22 23     p 21 22 23
>>  p 16 17  x     x  x  x  x    24 25 26  p    24 25 26  p    24 25 26  p
>> 18 19  p  x    18 19  p  x     x  x  x  x    27 28  p 29    27 28  p 29
>> 20  p 21  x    20  p 21  x     x  x  x  x    30  p 31 32    30  p 31 32
>>  p 22 23  x     p 22 23  x     x  x  x  x     p 33 34 35     p 33 34 35
>> 24 25  p  x    24 25  p  x     x  x  x  x    36 37 38  p    36 37 38  p
>> 26  p 27  x    26  p 27  x    26  p 27  x     x  x  x  x    39 40  p 41
>>  p 28 29  x     p 28 29  x     p 28 29  x     x  x  x  x    42  p 43 44
>>
>>
>> In Step 0 and Step 1 the source and destination stripes overlap, so a
>> backup is required. But at Step 2 you have a full stripe to work with
>> safely, at Step 4 two stripes are safe, at Step 6 three stripes and at
>> Step 7 four stripes. As you go, the safe region gets larger and larger,
>> requiring fewer and fewer sync points.
>>
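The growth of that safe region can also be put into numbers: after W
destination rows have been written in a 3->4 disk grow, 3*W data blocks
have been consumed, which covers floor(3*W/2) whole source rows, so the
slack between the write and read positions is floor(3*W/2) - W rows. A
throwaway shell loop, purely to reproduce the gaps visible in the diagram:

    d_old=2; d_new=3    # data blocks per row before and after a 3->4 grow
    for W in 2 4 6 9 12; do
        echo "after $W new rows: safe gap = $(( W * d_new / d_old - W )) rows"
    done
    # prints gaps of 1, 2, 3, 4 and 6 rows; the 1-, 2-, 3- and 4-row gaps
    # correspond to Steps 2, 4, 6 and 7 above (which have written 2, 4, 6
    # and 9 new rows respectively)
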
>> Ideally the raid reshape should read as much data from the source
>> stripes as possible in one go and then write it all out in one
>> go. Then rinse and repeat. For a simple implementation, why not do
>> this:
>>
>> 1) read reshape-sync-size from /proc/sys, defaulting to 10% of RAM size
>> 2) sync-size = min(reshape-sync-size, size of safe region)
>> 3) setup internal mirror between old (read-write) and new stripes (write only)
>> 4) read source blocks into stripe cache
>> 5) compute new parity
>> 6) put stripe into write cache
>> 7) goto 3 until sync-size is reached
>> 8) sync blocks to disk
>> 9) record progress and remove internal mirror
>> 10) goto 1
>>
>> Optionally in 9 you can skip recording the progress if the safe region
>> is big enough for another read/write pass.
>>
>> The important idea behind this would be that, given enough free RAM,
>> a large linear read and a large linear write alternate. Also,
>> since the normal cache is used instead of the static stripe cache, if
>> there is not enough RAM then writes will be flushed out prematurely.
>> This will lead to a degradation of performance, but that is better than
>> running out of memory.
>>
>> I have 4GB on my desktop with at least 3GB free if I'm not doing
>> anything expensive. A raid reshape should be able to do a 3GB linear
>> read and write alternately. But I would already be happy if it would
>> do 256MB. There is lots of opportunity for streaming. It might just be
>> hard to get the kernel IO system to cooperate.
>>
>> MfG
>>        Goswin
>>
>
> Skimming your message, I agree with the major points; however, you're
> only considering the best-case scenario (which is how it probably
> should run for performance).  There is also the worst-case scenario
> where a device, driver, OS, or even power (supply, let's say) fails in
> mid-operation.
>
> If there isn't a gap created due to the reshape (obviously the gap
> would continue to grow as the reshape proceeds) then it's still an 'in
> place' operation (which I argue should be done in the largest block
> possible within memory, but with data backed up on a device).

That is considered:
2) sync-size = min(reshape-sync-size, size of safe region)

At first the safe region is 0 and you need to back up some data. Then
the safe region is one stripe and things will go slowly. But as you
can see above, and as you say, the region quickly grows. I think the
region grows quickly enough that only a minimum of data needs to be
backed up, followed by a few slow iterations. It gets faster quickly
enough. But yeah, you can back up more at the start to get a larger
initial safe region.

> Growing operations obviously have free space on the new device, and
> further as the operation proceeds there will be a growing gap between
> the re-written data and the old copy of the data.
>
> Shrinking operations, counter-intuitively, also have a growing area of
> free space, at the end of the device.  Working backwards, after a
> given number of stripes, the operation should be just as safe, if in
> reverse, as a normal grow.
>
> In any of the three cases, the largest possible write window per
> device should be used to take advantage of the usual gains in speed.

MfG
        Goswin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: unbelievably bad performance: 2.6.27.37 and raid6
  2009-11-01 23:47         ` Thomas Fjellstrom
  2009-11-01 23:53           ` Jon Nelson
  2009-11-01 23:55           ` Andrew Dunn
@ 2009-11-04 14:43           ` CoolCold
  2 siblings, 0 replies; 26+ messages in thread
From: CoolCold @ 2009-11-04 14:43 UTC (permalink / raw)
  To: tfjellstrom; +Cc: NeilBrown, Andrew Dunn, Jon Nelson, LinuxRaid, pernegger

I'm experiencing MD lockup problems on the Debian 2.6.26 kernel, while
2.6.28.8 does not seem to have such problems.
The problem occurs when doing a raid check, which is scheduled for the 1st
Sunday of every month in Debian. The lockup looks like this: md resync speed
(really the check speed) drops to 0, and all processes which access that
/dev/md device hang, like:

coolcold@tazeg:~$ cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdd3[0] sdc3[1]
      290720192 blocks [2/2] [UU]
      [>....................]  resync =  0.9% (2906752/290720192) finish=5796.8min speed=825K/sec


Nov 1 07:09:19 tazeg kernel: [2986195.439183] INFO: task xfssyncd:3099
blocked for more than 120 seconds.
Nov 1 07:09:19 tazeg kernel: [2986195.439218] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 1 07:09:19 tazeg kernel: [2986195.439264] xfssyncd D
0000000000000000 0 3099 2
Nov 1 07:09:19 tazeg kernel: [2986195.439301] ffff81042c451ba0
0000000000000046 0000000000000000 ffffffff802285b8
Nov 1 07:09:19 tazeg kernel: [2986195.439353] ffff81042dc5c990
ffff81042e5c3570 ffff81042dc5cc18 0000000500000001
Nov 1 07:09:19 tazeg kernel: [2986195.439403] 0000000000000282
0000000000000000 00000000ffffffff 0000000000000000
Nov 1 07:09:19 tazeg kernel: [2986195.439442] Call Trace:
Nov 1 07:09:19 tazeg kernel: [2986195.439497] [<ffffffff802285b8>]
__wake_up_common+0x41/0x74
Nov 1 07:09:19 tazeg kernel: [2986195.439532] [<ffffffffa0107371>]
:raid1:wait_barrier+0x87/0xc8
Nov 1 07:09:19 tazeg kernel: [2986195.439562] [<ffffffff8022c32f>]
default_wake_function+0x0/0xe
Nov 1 07:09:19 tazeg kernel: [2986195.439594] [<ffffffffa0108db4>]
:raid1:make_request+0x73/0x5af
Nov 1 07:09:19 tazeg kernel: [2986195.439625] [<ffffffff80229850>]
update_curr+0x44/0x6f
Nov 1 07:09:19 tazeg kernel: [2986195.439656] [<ffffffff8031eeab>]
__up_read+0x13/0x8a
Nov 1 07:09:19 tazeg kernel: [2986195.439686] [<ffffffff8030d7c4>]
generic_make_request+0x2fe/0x339
Nov 1 07:09:19 tazeg kernel: [2986195.439720] [<ffffffff80273970>]
mempool_alloc+0x24/0xda
Nov 1 07:09:19 tazeg kernel: [2986195.439748] [<ffffffff8031b105>]
__next_cpu+0x19/0x26
Nov 1 07:09:19 tazeg kernel: [2986195.439777] [<ffffffff80228e5a>]
find_busiest_group+0x254/0x6f5
Nov 1 07:09:19 tazeg kernel: [2986195.439810] [<ffffffff8030eb83>]
submit_bio+0xd9/0xe0
Nov 1 07:09:19 tazeg kernel: [2986195.439863] [<ffffffffa02878a7>]
:xfs:_xfs_buf_ioapply+0x206/0x231
Nov 1 07:09:19 tazeg kernel: [2986195.439915] [<ffffffffa0287908>]
:xfs:xfs_buf_iorequest+0x36/0x61
Nov 1 07:09:19 tazeg kernel: [2986195.439963] [<ffffffffa0270be1>]
:xfs:xlog_bdstrat_cb+0x16/0x3c
Nov 1 07:09:19 tazeg kernel: [2986195.440017] [<ffffffffa0271ae5>]
:xfs:xlog_sync+0x20a/0x3a1
Nov 1 07:09:19 tazeg kernel: [2986195.440068] [<ffffffffa027277a>]
:xfs:xlog_state_sync_all+0xb6/0x1c5
Nov 1 07:09:19 tazeg kernel: [2986195.440102] [<ffffffff8023d21a>]
lock_timer_base+0x26/0x4b
Nov 1 07:09:19 tazeg kernel: [2986195.440155] [<ffffffffa0272cce>]
:xfs:_xfs_log_force+0x58/0x67
Nov 1 07:09:19 tazeg kernel: [2986195.440187] [<ffffffff8042adf2>]
schedule_timeout+0x92/0xad
Nov 1 07:09:19 tazeg kernel: [2986195.440238] [<ffffffffa0272ce8>]
:xfs:xfs_log_force+0xb/0x2a
Nov 1 07:09:19 tazeg kernel: [2986195.440287] [<ffffffffa027e50b>]
:xfs:xfs_syncsub+0x33/0x226
Nov 1 07:09:19 tazeg kernel: [2986195.440337] [<ffffffffa028c7f7>]
:xfs:xfs_sync_worker+0x17/0x36
Nov 1 07:09:19 tazeg kernel: [2986195.440385] [<ffffffffa028d42d>]
:xfs:xfssyncd+0x133/0x187
Nov 1 07:09:19 tazeg kernel: [2986195.440433] [<ffffffffa028d2fa>]
:xfs:xfssyncd+0x0/0x187
Nov 1 07:09:19 tazeg kernel: [2986195.440466] [<ffffffff80246413>]
kthread+0x47/0x74
Nov 1 07:09:19 tazeg kernel: [2986195.440497] [<ffffffff8023030b>]
schedule_tail+0x27/0x5b
Nov 1 07:09:19 tazeg kernel: [2986195.440529] [<ffffffff8020cf28>]
child_rip+0xa/0x12
Nov 1 07:09:19 tazeg kernel: [2986195.440563] [<ffffffff802463cc>]
kthread+0x0/0x74
Nov 1 07:09:19 tazeg kernel: [2986195.440594] [<ffffffff8020cf1e>]
child_rip+0x0/0x12


The same happened in 2.6.25.5, but additionally it had XFS issues ;)
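
For anyone hitting the same hang: the monthly check can be cancelled by
hand through sysfs while debugging -- a sketch, assuming the stuck array
is /dev/md3 as above:

    cat /sys/block/md3/md/sync_action           # shows "check" while it runs
    echo idle > /sys/block/md3/md/sync_action   # abort the running check
    # the background rate limits, in case throttling rather than a lockup
    # is suspected
    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max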

On Mon, Nov 2, 2009 at 2:47 AM, Thomas Fjellstrom <tfjellstrom@shaw.ca> wrote:
>
> On Sun November 1 2009, NeilBrown wrote:
> > On Mon, November 2, 2009 6:41 am, Thomas Fjellstrom wrote:
> > > On Sun November 1 2009, Andrew Dunn wrote:
> > >> Are we to expect some resolution in newer kernels?
> > >
> > > I assume all of the new per-bdi-writeback work going on in .33+ will
> > > have a
> > > large impact. At least I'm hoping.
> > >
> > >> I am going to rebuild my array (backup data and re-create) to modify
> > >> the chunk size this week. I hope to get much higher performance when
> > >> increasing from 64k chunk size to 1024k.
> > >>
> > >> Is there a way to modify chunk size in place or does the array need to
> > >> be re-created?
> > >
> > > This I'm not sure about. I'd like to be able to reshape to a new chunk
> > > size
> > > for testing.
> >
> > Reshaping to a new chunksize is possible with the latest mdadm and
> >  kernel, but I would recommend waiting for mdadm-3.1.1 and 2.6.32.
> > With the current code, a device failure during reshape followed by an
> > unclean shutdown while reshape is happening can lead to unrecoverable
> > data loss.  Even a clean shutdown before the reshape finishes in that case
> > might be a problem.
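
As a concrete sketch of the chunk-size reshape being discussed (the device
name, chunk size and backup path are only examples; per the advice above
this wants mdadm-3.1.1 and a recent kernel):

    mdadm --grow /dev/md0 --chunk=1024 --backup-file=/root/md0-chunk.bak
    cat /proc/mdstat    # reshape progress shows up here
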
>
> That's good to know. Though I'm stuck with 2.6.26 till the performance
> regressions in the io and scheduling subsystems are solved.
>
> > NeilBrown
> >
> >
>
>
> --
> Thomas Fjellstrom
> tfjellstrom@shaw.ca



--
Best regards,
[COOLCOLD-RIPN]

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2009-11-04 14:43 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-10-31 15:55 unbelievably bad performance: 2.6.27.37 and raid6 Jon Nelson
2009-10-31 18:43 ` Thomas Fjellstrom
2009-11-01 19:37   ` Andrew Dunn
2009-11-01 19:41     ` Thomas Fjellstrom
2009-11-01 23:43       ` NeilBrown
2009-11-01 23:47         ` Thomas Fjellstrom
2009-11-01 23:53           ` Jon Nelson
2009-11-02  2:28             ` Neil Brown
2009-11-01 23:55           ` Andrew Dunn
2009-11-04 14:43           ` CoolCold
2009-10-31 19:59 ` Christian Pernegger
2009-11-02 19:39   ` Jon Nelson
2009-11-02 20:01     ` Christian Pernegger
2009-11-01  7:17 ` Kristleifur Daðason
2009-11-02 14:54 ` Bill Davidsen
2009-11-02 15:03   ` Jon Nelson
2009-11-03  5:36     ` NeilBrown
2009-11-03  6:09       ` Michael Evans
2009-11-03  6:28         ` NeilBrown
2009-11-03  6:39           ` Michael Evans
2009-11-03  6:46           ` Michael Evans
2009-11-03  9:16             ` NeilBrown
2009-11-03 13:07           ` Goswin von Brederlow
2009-11-03 16:28             ` Michael Evans
2009-11-03 19:26               ` Goswin von Brederlow
2009-11-02 18:51   ` Christian Pernegger
