From: Adam Goryachev <mailinglists@websitemanagers.com.au>
To: stan@hardwarefreak.com
Cc: Dave Cundiff <syshackmin@gmail.com>, linux-raid@vger.kernel.org
Subject: Re: RAID performance - new kernel results - 5x SSD RAID5
Date: Thu, 21 Feb 2013 04:10:15 +1100
Message-ID: <51250377.509@websitemanagers.com.au>
In-Reply-To: <51222A81.9080600@hardwarefreak.com>

On 19/02/13 00:20, Stan Hoeppner wrote:
> On 2/17/2013 3:52 AM, Adam Goryachev wrote:
> 
>>    READ: io=4096MB, aggrb=2242MB/s, minb=2296MB/s, maxb=2296MB/s,
>> mint=1827msec, maxt=1827msec
> 
>>   WRITE: io=4096MB, aggrb=560660KB/s, minb=574116KB/s, maxb=574116KB/s,
>> mint=7481msec, maxt=7481msec
> 
> Our read throughput is almost exactly 4x the write throughput.  At the
> hardware level, single SSD write throughput should only be ~10% lower
> than read.  Sequential writes w/RAID5 should not cause RMW cycles so
> that is not in play in these tests.  So why are writes so much slower?
> Knowing these things, where should we start looking for our performance
> killing needle in this haystack?
> 
> We know that the md/RAID5 driver still uses a single write thread in
> kernel 3.2.35.  And given we're pushing over 500MB/s through md/RAID5 to
> SSD storage, it's possible that this thread is eating all of one CPU
> core with both IOs and parity calculations, limiting write throughput.
> So that's the first place to look.  For your 7 second test run of FIO,
> we could do some crude instrumenting.  Assuming you have top set up to
> show individual CPUs (if not, hit '1' in interactive mode to get them,
> then exit), we can grab top output twice a second for 10 seconds, in
> another terminal window.  So we do something like the following, giving
> 3 seconds to switch windows and launch FIO.  (Or one could do it in a
> single window, writing a script to pipe the output of each to a file.)
> 
> ~$ top -b -n 20 -d 0.5 |grep Cpu
> 
> yields 28 lines of this for 2 cores, 56 lines for 4 cores.
> 
> Cpu0 : 1.2%us, 0.5%sy, 1.8%ni, 96.4%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu1 : 1.1%us, 0.5%sy, 2.2%ni, 96.1%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu0 : 1.9%us, 1.9%sy, 0.0%ni, 96.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> 
> This will give us a good idea of what the cores are doing during the FIO
> run, as well as interrupt distribution, which CPUs are handling the
> lower level IO threads, how long we're waiting on the SSDs, etc.  If any
> core is at 98%+ during the run then md thread starvation is the problem.

That didn't quite work; I had to run the top command like this:
top -n20 -d 0.5 | grep Cpu
and then press '1' after it started; top didn't save the per-CPU view
from when I ran it interactively and then exited.
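(As an aside: interactive top's 'W' keystroke writes the current
configuration, including the per-CPU '1' toggle, to ~/.toprc, after which
batch mode should show individual CPUs. The single-window variant Stan
mentioned could then be sketched like this; 'jobfile.fio' and
'top-cpu.log' are placeholder names:)

```shell
#!/bin/sh
# Press '1' then 'W' in interactive top first, so ~/.toprc keeps the
# per-CPU view for batch mode. 'jobfile.fio' is a placeholder job file.
top -b -n 20 -d 0.5 | grep Cpu > top-cpu.log &
sleep 3            # give top a head start on sampling
fio jobfile.fio
wait               # let the background top finish its 20 samples
```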
Output is as follows:
Cpu0  :  0.1%us,  2.3%sy,  0.0%ni, 94.3%id,  3.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu1  :  0.1%us,  0.5%sy,  0.0%ni, 98.5%id,  0.9%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.1%us,  0.2%sy,  0.0%ni, 99.3%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.3%sy,  0.0%ni, 97.8%id,  1.9%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.1%us,  0.1%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.1%us,  0.3%sy,  0.0%ni, 98.0%id,  1.5%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.1%us,  0.1%sy,  0.0%ni, 97.4%id,  2.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.1%sy,  0.0%ni, 99.6%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 47.9%sy,  0.0%ni, 50.0%id,  0.0%wa,  0.0%hi,  2.1%si,  0.0%st
Cpu1  :  0.0%us,  2.0%sy,  0.0%ni, 98.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  2.0%us, 35.3%sy,  0.0%ni,  0.0%id, 62.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  3.8%sy,  0.0%ni, 96.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 37.3%sy,  0.0%ni, 62.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us, 13.7%sy,  0.0%ni, 52.9%id, 33.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  2.0%us, 12.0%sy,  0.0%ni, 46.0%id, 40.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 26.0%sy,  0.0%ni, 44.0%id, 26.0%wa,  0.0%hi,  4.0%si,  0.0%st
Cpu1  :  0.0%us,  7.7%sy,  0.0%ni, 82.7%id,  9.6%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  4.0%sy,  0.0%ni, 86.0%id, 10.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  1.9%us,  1.9%sy,  0.0%ni, 96.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  3.9%sy,  0.0%ni, 13.7%id, 82.4%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 10.2%sy,  0.0%ni, 51.0%id, 38.8%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  2.0%sy,  0.0%ni, 86.0%id, 12.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.7%sy,  0.0%ni, 66.2%id, 33.1%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 15.7%sy,  0.0%ni, 39.2%id, 41.2%wa,  0.0%hi,  3.9%si,  0.0%st
Cpu1  :  0.0%us,  4.0%sy,  0.0%ni, 82.0%id, 14.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  1.9%us,  1.9%sy,  0.0%ni, 96.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.7%sy,  0.0%ni, 66.7%id, 32.7%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 12.2%sy,  0.0%ni, 55.1%id, 32.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  6.0%sy,  0.0%ni, 56.0%id, 38.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  1.9%sy,  0.0%ni, 98.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  1.9%sy,  0.0%ni, 98.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.7%sy,  0.0%ni, 66.2%id, 33.1%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 12.5%sy,  0.0%ni, 41.7%id, 45.8%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  6.2%sy,  0.0%ni, 89.6%id,  4.2%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni, 62.0%id, 38.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  2.0%sy,  0.0%ni, 78.4%id, 19.6%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  3.8%sy,  0.0%ni, 96.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  1.9%sy,  0.0%ni, 57.7%id, 40.4%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 13.7%sy,  0.0%ni, 33.3%id, 51.0%wa,  0.0%hi,  2.0%si,  0.0%st
Cpu1  :  0.0%us,  7.8%sy,  0.0%ni, 80.4%id, 11.8%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.0%sy,  0.0%ni, 66.7%id, 33.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 10.4%sy,  0.0%ni, 25.0%id, 62.5%wa,  0.0%hi,  2.1%si,  0.0%st
Cpu1  :  0.0%us,  8.0%sy,  0.0%ni, 88.0%id,  4.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.7%sy,  0.0%ni, 66.7%id, 32.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us,  6.5%sy,  0.0%ni, 21.7%id, 71.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  7.8%sy,  0.0%ni, 88.2%id,  3.9%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.0%sy,  0.0%ni, 66.7%id, 33.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  1.9%us,  1.9%sy,  0.0%ni, 96.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 14.0%sy,  0.0%ni, 34.0%id, 52.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  9.8%sy,  0.0%ni, 80.4%id,  9.8%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  1.3%sy,  0.0%ni, 65.8%id, 32.9%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  2.0%sy,  0.0%ni, 98.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Cpu0  :  0.0%us, 12.5%sy,  0.0%ni, 29.2%id, 58.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  6.0%sy,  0.0%ni, 86.0%id,  8.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.7%sy,  0.0%ni, 66.2%id, 33.1%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  2.0%us,  0.0%sy,  0.0%ni, 98.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

There was more, with very similar figures... Apart from the second sample
above, there was never a single CPU anywhere close to 0% idle, and I'm
assuming CPU time in the wa state is basically "idle", waiting for the
disk (or something else) to happen, rather than the CPU actually being busy...
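To make that reading concrete, here's a rough way to post-process the
captured lines (assuming they were saved to a file, top-cpu.log being a
placeholder name): count a core as saturated only when us+sy+ni+hi+si
approaches 100%, since %wa is time blocked on I/O, not compute.

```shell
# Flag any core whose non-idle, non-iowait time reaches 98%: that would
# indicate md write-thread starvation. %id and %wa do not count as busy.
awk -F'[:,]' '{
    cpu = $1; sub(/ *$/, "", cpu)
    busy = 0
    for (i = 2; i <= NF; i++)
        if ($i ~ /%(us|sy|ni|hi|si)/) {
            v = $i; sub(/%.*/, "", v); busy += v
        }
    if (busy >= 98)
        printf "saturated: %s (%.1f%% busy)\n", cpu, busy
}' top-cpu.log
```

On the samples above this prints nothing, which matches the conclusion
that no core is compute-bound during the run.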

> (If you have hyperthreading enabled, reboot and disable it.  It normally
> decreases thread performance due to scheduling and context switching
> overhead, among other things.  Not to mention it makes determining
> actual CPU load more difficult.  In this exercise you'll needlessly have
> twice as many lines of output to comb through.)

I'll have to go in after hours to do that, hopefully over the weekend
(it's a BIOS setting and there's no remote KVM)... I can re-supply the
results after that if you think it will make a difference.

> If md is peaking a single core, the next step is to optimize the single
> thread performance.  There's not much you can do here but to optimize
> the parity calculation rate and tweak buffering.  I'm no expert on this
> but others here are.  IIRC you can tweak md to use the floating point
> registers and SSEx/AVX instructions.  These FP execution units in the
> CPU run in parallel to the integer units, and are 128 vs 64 bits wide
> (256 for AVX).  So not only is the number crunching speed increased, but
> it's done in parallel to the other instructions.  This makes the integer
> units more available.  You should also increase your stripe_cache_size
> if you haven't already.  Such optimizations won't help much overall--
> we're talking 5-20% maybe-- because the bottleneck lies elsewhere in the
> code.  Which brings us to...
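(For reference, stripe_cache_size lives in sysfs and is counted in 4 KiB
pages per device, so memory cost is size x 4 KiB x number of drives;
/dev/md0 below is a stand-in for the actual array name:)

```shell
# Current value; the kernel default is 256 pages per device.
cat /sys/block/md0/md/stripe_cache_size
# 8192 pages x 4 KiB x 5 drives = 160 MiB of stripe cache:
echo 8192 > /sys/block/md0/md/stripe_cache_size
```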
> 
> The only other way I know of to increase single thread RAID5 write
> performance substantially is to grab a very recent kernel and Shaohua
> Li's patch set developed specifically for the single write thread
> problem on RAID1/10/5/6.  His test numbers show improvements of 130-200%
> increasing with drive count, but not linearly.  It is described here:
> http://lwn.net/Articles/500200/
> 
> With current distro kernels and lots of SSDs, the only way to
> significantly improve this single thread write performance is to use
> nested md/RAID0 over smaller arrays to increase the thread count and
> bring more cores into play.  With this you get one write thread per
> constituent array.  Each thread receives one core of performance.  The
> stripe over them has no threads and can scale to any number of cores.
> 
> Assuming you are currently write thread bound at ~560-600MB/s, adding
> one more Intel SSD for 6 total gives us...
> 
> RAID0 over 3 RAID1, 3 threads-- should yield read speed between 1.5 and
> 3GB/s depending on load, and increase your write speed to 1.6GB/s, for
> the loss of 480G capacity.
> 
> RAID0 over 2 RAID5, 2 threads-- should yield between 2.2 and 2.6GB/s
> read speed, and increase your write speed to ~1.1GB/s, for no change in
> capacity.
> 
> Again, these numbers assume the low write performance is due to thread
> starvation.

I don't think it is from my measurements...

> The downside for both:  Neither of these configurations can be expanded
> with a reshape and thus drives cannot be added.  That can be achieved by
> using a linear layer atop these RAID0 devices, and adding new md devices
> to the linear array later.  With this you don't get automatic even
> distribution of IO for the linear array, but only for the constituent
> striped arrays.  This isn't a bad tradeoff when IO flow analysis and
> architectural planning are performed before a system is deployed.
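(As a sketch of what that layering could look like with mdadm -- six
SSDs, hypothetical device names, untested:)

```shell
# Two 3-drive RAID5 legs, striped together, with a linear array on top
# so another leg can be appended later. Device names are made up.
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2
mdadm --create /dev/md4 --level=linear --force --raid-devices=1 /dev/md3
# Later expansion: build another RAID0-over-RAID5 and append it, e.g.
#   mdadm --grow /dev/md4 --add /dev/md5
```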

I'll disable the hyperthreading, and re-test afterwards, but I'm not
sure that will produce much of a result. Let me know if you think I
should run any other tests to track it down...

One thing I can see is a large number of interrupts and context switches,
which look like they happened at the same time as a backup run. Perhaps I
am getting too many interrupts on the network cards or the SATA controller?
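A quick way to check that (driver names below are guesses for this box):

```shell
# Where are the NIC and SATA controller interrupts landing?
grep -iE 'eth|ahci|sata|sas' /proc/interrupts
# Interrupt ('in') and context-switch ('cs') rates, 1-second samples:
vmstat 1 5
# If everything lands on CPU0, IRQ affinity can be moved by hand,
# e.g. a hypothetical IRQ 45 onto CPU1 (bitmask 2):
echo 2 > /proc/irq/45/smp_affinity
```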

Thanks,
Adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au
