* Single HDD , 700MB/s ??
@ 2012-06-27  2:56 Homer Li
  2012-06-27 17:02 ` Martin Steigerwald
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Homer Li @ 2012-06-27  2:56 UTC (permalink / raw)
  To: fio

Hello, all;

        When I used fio to benchmark a single HDD, the write speed
reached 700~800MB/s.
        The HDD model is WD2002FYPS-18U1B0, 7200rpm, 2TB.
        In my RAID config there is only one HDD in every RAID0
group, like JBOD.

        When I change numjobs=8 to numjobs=1, the benchmark
result looks ok: about 100MB/s.

        Is there anything wrong with my fio config?


Raid controller: PERC 6/E (LSI SAS1068E)
OS : CentOS 5.5 upgrade to 2.6.18-308.8.2.el5 x86_64

# fio -v
fio 2.0.7


# cat /tmp/fio2.cfg
[global]
rw=write
ioengine=sync
size=100g
numjobs=8
bssplit=128k/20:256k/20:512k/20:1024k/40
nrfiles=2
iodepth=128
lockmem=1g
zero_buffers
timeout=120
direct=1
thread

[sdb]
filename=/dev/sdb

run
#fio /tmp/fio2.cfg


Run status group 0 (all jobs):
  WRITE: io=68235MB, aggrb=582254KB/s, minb=72636KB/s, maxb=73052KB/s,
mint=120001msec, maxt=120003msec


#iostat -x 2

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz
avgqu-sz   await  svctm  %util
sdb               0.00     0.00  0.00 2399.50     0.00   571.38
487.67    18.23    7.60   0.42 100.05

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.38    0.00    1.12   19.00    0.00   77.50

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz
avgqu-sz   await  svctm  %util
sdb               0.00     0.00  0.00 3231.00     0.00   771.19
488.82    18.29    5.66   0.31 100.05

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.19    0.00    1.06   19.36    0.00   77.39

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz
avgqu-sz   await  svctm  %util
sdb               0.00     0.00  0.00 3212.00     0.00   769.94
490.92    18.69    5.82   0.31 100.05

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.50    0.00    0.69   20.06    0.00   77.75

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz
avgqu-sz   await  svctm  %util
sdb               0.00     0.00  0.00 1937.50     0.00   466.06
492.64    19.18    9.91   0.52 100.05

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.06    0.00    0.94   19.94    0.00   77.06

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz
avgqu-sz   await  svctm  %util
sdb               0.00     0.00  0.00 2789.00     0.00   668.50
490.89    18.72    6.71   0.36 100.05




Here is my Raid controller config:

this is sdb:

DISK GROUP: 0
Number of Spans: 1
SPAN: 0
Span Reference: 0x00
Number of PDs: 1
Number of VDs: 1
Number of dedicated Hotspares: 0
Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-0, Secondary-0, RAID Level Qualifier-0
Size                : 1.818 TB
State               : Optimal
Stripe Size         : 64 KB
Number Of Drives    : 1
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache
if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache
if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Physical Disk Information:
Physical Disk: 0
Enclosure Device ID: 16
Slot Number: 0
Device Id: 31
Sequence Number: 4
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 1.819 TB [0xe8e088b0 Sectors]
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Online, Spun Up
SAS Address(0): 0x5a4badb20bf57f88
Connected Port Number: 4(path0)
Inquiry Data:      WD-WCAVY1070166WDC WD2002FYPS-18U1B0
   05.05G07
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: Unknown
Link Speed: Unknown
Media Type: Hard Disk Device

this is sdc :

DISK GROUP: 1
Number of Spans: 1
SPAN: 0
Span Reference: 0x01
Number of PDs: 1
Number of VDs: 1
Number of dedicated Hotspares: 0
Virtual Drive Information:
Virtual Drive: 0 (Target Id: 1)
Name                :
RAID Level          : Primary-0, Secondary-0, RAID Level Qualifier-0
Size                : 1.818 TB
State               : Optimal
Stripe Size         : 64 KB
Number Of Drives    : 1
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache
if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache
if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Physical Disk Information:
Physical Disk: 0
Enclosure Device ID: 16
Slot Number: 1
Device Id: 28
Sequence Number: 4
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 1.819 TB [0xe8e088b0 Sectors]
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Online, Spun Up
SAS Address(0): 0x5a4badb20bf57f87
Connected Port Number: 4(path0)
Inquiry Data:      WD-WCAVY1090906WDC WD2002FYPS-18U1B0
   05.05G07
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: Unknown
Link Speed: Unknown
Media Type: Hard Disk Device

.......................................................




Best Regards
Homer Li


* Re: Single HDD , 700MB/s ??
  2012-06-27  2:56 Single HDD , 700MB/s ?? Homer Li
@ 2012-06-27 17:02 ` Martin Steigerwald
  2012-06-28  8:15   ` Homer Li
  2012-06-28 18:11 ` Kyle Hailey
       [not found] ` <CACiQ3FACDJxKwvzmtQfvaj=iz6dvJe8bbCGLoCk4HHHpCZPyDg@mail.gmail.com>
  2 siblings, 1 reply; 6+ messages in thread
From: Martin Steigerwald @ 2012-06-27 17:02 UTC (permalink / raw)
  To: Homer Li; +Cc: fio

Am Mittwoch, 27. Juni 2012 schrieb Homer Li:
> Hello, all;
> 
>         When I used fio to benchmark a single HDD, the write speed
> reached 700~800MB/s.
>         The HDD model is WD2002FYPS-18U1B0, 7200rpm, 2TB.
>         In my RAID config there is only one HDD in every RAID0
> group, like JBOD.
> 
>         When I change numjobs=8 to numjobs=1, the benchmark
> result looks ok: about 100MB/s.

Which is still quite fast. However, that difference is puzzling.
 
> Raid controller: PERC 6/E (LSI SAS1068E)
> OS : CentOS 5.5 upgrade to 2.6.18-308.8.2.el5 x86_64
> 
> # fio -v
> fio 2.0.7
> 
> 
> # cat /tmp/fio2.cfg
> [global]
> rw=write
> ioengine=sync
> size=100g
> numjobs=8
> bssplit=128k/20:256k/20:512k/20:1024k/40
> nrfiles=2
> iodepth=128
> lockmem=1g
> zero_buffers
> timeout=120
> direct=1
> thread
> 
> [sdb]
> filename=/dev/sdb

I wonder whether this could be ioengine related, although I do not see
any limitation on direct I/O for the sync engine in the HOWTO:

direct=bool     If value is true, use non-buffered io. This is usually
                O_DIRECT. Note that ZFS on Solaris doesn't support direct io.
                On Windows the synchronous ioengines don't support direct io.

Please try with ioengine=libaio nonetheless.
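
For example, something like this (an untested sketch, identical to your job
file except for the ioengine line, so the iodepth=128 setting can actually
take effect):

    [global]
    rw=write
    ioengine=libaio
    size=100g
    numjobs=8
    bssplit=128k/20:256k/20:512k/20:1024k/40
    nrfiles=2
    iodepth=128
    lockmem=1g
    zero_buffers
    timeout=120
    direct=1
    thread

    [sdb]
    filename=/dev/sdb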

> run
> #fio /tmp/fio2.cfg
> 
> 
> Run status group 0 (all jobs):
>   WRITE: io=68235MB, aggrb=582254KB/s, minb=72636KB/s, maxb=73052KB/s,
> mint=120001msec, maxt=120003msec

[…]

> Here is my Raid controller config:
> 
> this is sdb:
[…]
> Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache
> if Bad BBU
> Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache
> if Bad BBU

Or your RAID controller is caching the writes. I am not sure how the
direct option interacts with that, though. It has been quite some time
since I last worked with a hardware RAID controller.

That sounds more likely to me, but it should affect numjobs=1 as well.
Please try disabling the RAID controller cache, or connect the disk you
want to test to a controller without caching.
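
If it is a MegaRAID-based PERC, something along these lines should do it
(from memory and untested, so double-check the syntax against your MegaCli
version; -L0 targets the first virtual drive, -a0 the first adapter):

    # switch the virtual drive to write-through, i.e. no controller write cache
    /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WT -L0 -a0
    # optionally disable the disk's own write cache as well
    /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -DisDskCache -L0 -a0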

A third possibility is that some compression is active, but that seems
unlikely, because it should affect the numjobs=1 workload as well. I have
never heard of compressing hard disk firmware, but newer SandForce SSDs do it.

Still, the RAID controller cache could play tricks with the zeros you
send to it via "zero_buffers". I would try without it to make sure.

And then there could be a bug in fio.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: Single HDD , 700MB/s ??
  2012-06-27 17:02 ` Martin Steigerwald
@ 2012-06-28  8:15   ` Homer Li
  2012-06-28 13:18     ` Martin Steigerwald
  0 siblings, 1 reply; 6+ messages in thread
From: Homer Li @ 2012-06-28  8:15 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: fio

Hi, Martin,

     When I enabled the RAID controller cache, there was not much
difference between libaio and sync.

     When I disabled the RAID controller cache, the HDD speed was no
longer crazy.

     Then I disabled both the RAID controller cache and the HDD cache.
     One write thread with the sync engine is the slowest.

     By the way, you said it could be some compression being active.
     Is it possible there is some compression in the RAID controller
cache? Because when I disabled the RAID controller cache and enabled
the HDD cache, 8 write threads came close to 1 write thread.

    Thanks for your help.  ^_^

Details:

Disabled RAID controller cache and HDD cache:

8 jobs libaio
Run status group 0 (all jobs):
  WRITE: io=10084MB, aggrb=85652KB/s, minb=9935KB/s, maxb=11208KB/s,
mint=120424msec, maxt=120554msec

1 job libaio
Run status group 0 (all jobs):
 WRITE: io=13443MB, aggrb=114403KB/s, minb=114403KB/s,
maxb=114403KB/s, mint=120326msec, maxt=120326msec

8 jobs sync
Run status group 0 (all jobs):
  WRITE: io=4811.2MB, aggrb=52954KB/s, minb=6227KB/s, maxb=7043KB/s,
mint=92948msec, maxt=93035msec

1 job sync
Run status group 0 (all jobs):
  WRITE: io=4236.0MB, aggrb=36143KB/s, minb=36143KB/s, maxb=36143KB/s,
mint=120013msec, maxt=120013msec

Enabled disk cache:
1 job sync:
Run status group 0 (all jobs):
  WRITE: io=5602.3MB, aggrb=114722KB/s, minb=114722KB/s,
maxb=114722KB/s, mint=50005msec, maxt=50005msec

8 jobs sync:
Run status group 0 (all jobs):
  WRITE: io=3998.3MB, aggrb=81843KB/s, minb=10039KB/s, maxb=10590KB/s,
mint=50010msec, maxt=50025msec

1 job libaio:
Run status group 0 (all jobs):
  WRITE: io=5633.8MB, aggrb=114586KB/s, minb=114586KB/s,
maxb=114586KB/s, mint=50346msec, maxt=50346msec

8 jobs libaio:
Run status group 0 (all jobs):
  WRITE: io=4583.7MB, aggrb=92884KB/s, minb=11405KB/s, maxb=12263KB/s,
mint=50470msec, maxt=50532msec


sdb raid config:
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-0, Secondary-0, RAID Level Qualifier-0
Size                : 1.818 TB
State               : Optimal
Stripe Size         : 64 KB
Number Of Drives    : 1
Span Depth          : 1
Default Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write
Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write
Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disabled
Encryption Type     : None



2012/6/28 Martin Steigerwald <Martin@lichtvoll.de>:
> Am Mittwoch, 27. Juni 2012 schrieb Homer Li:
[…]



--
Let one live alone doing no evil, care-free, like an elephant in the
elephant forest


* Re: Single HDD , 700MB/s ??
  2012-06-28  8:15   ` Homer Li
@ 2012-06-28 13:18     ` Martin Steigerwald
  0 siblings, 0 replies; 6+ messages in thread
From: Martin Steigerwald @ 2012-06-28 13:18 UTC (permalink / raw)
  To: Homer Li; +Cc: fio

Am Donnerstag, 28. Juni 2012 schrieb Homer Li:
> Hi ,Martin,

Hi Homer,

I prefer http://learn.to/quote (there is an English text as well).

>      When I enabled the RAID controller cache, there was not much
> difference between libaio and sync.

No wonder.

>      When I disabled the RAID controller cache, the HDD speed was no
> longer crazy.
> 
>      Then I disabled both the RAID controller cache and the HDD cache.
>      One write thread with the sync engine is the slowest.

I do not know at the moment why sync and libaio are that different. Oh,
well, maybe I do: iodepth. I think the sync engine cannot do iodepth != 1.
This should be documented somewhere, so you can verify what I am saying.

>      By the way, you said it could be some compression being active.
>      Is it possible there is some compression in the RAID controller
> cache? Because when I disabled the RAID controller cache and enabled
> the HDD cache, 8 write threads came close to 1 write thread.

It is just some wild guessing. To verify, remove zero_buffers and use
random data. That should at least reduce any possible compression effects.
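
For example, a trimmed-down variant of your job file (untested sketch;
refill_buffers regenerates the buffer contents for every write, assuming
your fio version supports that option):

    [global]
    rw=write
    ioengine=sync
    direct=1
    size=100g
    numjobs=8
    bssplit=128k/20:256k/20:512k/20:1024k/40
    nrfiles=2
    # no zero_buffers here; refill_buffers gives each write fresh, non-zero data
    refill_buffers
    timeout=120
    thread

    [sdb]
    filename=/dev/sdb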

Maybe the RAID controller cache is playing other tricks. I do not know. I
am no expert on hardware RAID controllers and their firmware.

> 
>     Thanks for your help.  ^_^
> 
> Details:
> 
> Disabled RAID controller cache and HDD cache:
> 
> 8 jobs libaio
> Run status group 0 (all jobs):
>   WRITE: io=10084MB, aggrb=85652KB/s, minb=9935KB/s, maxb=11208KB/s,
> mint=120424msec, maxt=120554msec
> 
> 1 job libaio
> Run status group 0 (all jobs):
>  WRITE: io=13443MB, aggrb=114403KB/s, minb=114403KB/s,
> maxb=114403KB/s, mint=120326msec, maxt=120326msec

Sounds sensible. One job is still quite fast. And 8 jobs should generate 
some seeks.

> 8 jobs sync
> Run status group 0 (all jobs):
>   WRITE: io=4811.2MB, aggrb=52954KB/s, minb=6227KB/s, maxb=7043KB/s,
> mint=92948msec, maxt=93035msec
> 
> 1 job sync
> Run status group 0 (all jobs):
>   WRITE: io=4236.0MB, aggrb=36143KB/s, minb=36143KB/s, maxb=36143KB/s,
> mint=120013msec, maxt=120013msec

Look at the iodepth statistics. I think the 8-job run will use higher
iodepths, i.e. have more requests in flight, which gives a chance to
reorder requests and so on. Although with the write cache disabled the hard
disk firmware should not be able to do much of that, Linux might still hand
over presorted requests. I am a bit unsure about the exact behavior here.
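
In the per-job section of the fio output, the line to check looks roughly
like this (illustrative only, your percentages will differ):

    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%

With the sync engine each job should report 1=100.0% there no matter what
iodepth is set to; with 8 jobs the device can still see several requests in
flight, one per job.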

> Enabled disk cache:
> 1 job sync:
> Run status group 0 (all jobs):
>   WRITE: io=5602.3MB, aggrb=114722KB/s, minb=114722KB/s,
> maxb=114722KB/s, mint=50005msec, maxt=50005msec
> 
> 8 jobs sync:
> Run status group 0 (all jobs):
>   WRITE: io=3998.3MB, aggrb=81843KB/s, minb=10039KB/s, maxb=10590KB/s,
> mint=50010msec, maxt=50025msec
> 
> 1 job libaio:
> Run status group 0 (all jobs):
>   WRITE: io=5633.8MB, aggrb=114586KB/s, minb=114586KB/s,
> maxb=114586KB/s, mint=50346msec, maxt=50346msec
> 
> 8 jobs libaio:
> Run status group 0 (all jobs):
>   WRITE: io=4583.7MB, aggrb=92884KB/s, minb=11405KB/s, maxb=12263KB/s,
> mint=50470msec, maxt=50532msec

Again verify the iodepth.

Also look at hdparm -I for the disk to see the queue depth it supports,
and try with twice that in order to always keep the queue to the hard disk
filled.
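
Something like this (sketch only; whether hdparm can query a disk sitting
behind the PERC at all is another question):

    # hdparm -I /dev/sdb | grep -i 'queue depth'

If the reported queue depth is, say, 32, then iodepth=64 in the libaio job
would be a reasonable starting point.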

> sdb raid config:
> Adapter 0 -- Virtual Drive Information:
> Virtual Drive: 0 (Target Id: 0)
> Name                :
> RAID Level          : Primary-0, Secondary-0, RAID Level Qualifier-0
> Size                : 1.818 TB
> State               : Optimal
> Stripe Size         : 64 KB
> Number Of Drives    : 1
> Span Depth          : 1
> Default Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write
> Cache if Bad BBU
> Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write
> Cache if Bad BBU
> Access Policy       : Read/Write
> Disk Cache Policy   : Disabled
> Encryption Type     : None

Sounds sane if you want to test without caching.

For production workloads, and for production-workload performance
measurements, I recommend turning write caching on again, provided your
controller has a battery (BBU). In that case you can also disable barriers
/ cache flushes in the filesystem for a further speed boost.
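
For example (sketch only; /data is a placeholder mount point, and the
option name depends on the filesystem):

    mount -o remount,barrier=0 /data     # ext3/ext4
    mount -o remount,nobarrier /data     # XFS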

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: Single HDD , 700MB/s ??
  2012-06-27  2:56 Single HDD , 700MB/s ?? Homer Li
  2012-06-27 17:02 ` Martin Steigerwald
@ 2012-06-28 18:11 ` Kyle Hailey
       [not found] ` <CACiQ3FACDJxKwvzmtQfvaj=iz6dvJe8bbCGLoCk4HHHpCZPyDg@mail.gmail.com>
  2 siblings, 0 replies; 6+ messages in thread
From: Kyle Hailey @ 2012-06-28 18:11 UTC (permalink / raw)
  To: Homer Li; +Cc: fio

This example brings up some interesting questions for me:

    [global]
    rw=write
    ioengine=sync
    size=100g
    numjobs=8
    nrfiles=2
    iodepth=128
    zero_buffers
    timeout=120
    direct=1
    [sdb]
    filename=/dev/sdb


Why use zero_buffers instead of random data when there is a possibility of
a layer optimizing the zero writes?

Does iodepth make any difference with ioengine=sync? Or is iodepth
just there for the other, async tests?

In the above job file, there is only one file listed (albeit a raw device).
Does it make any difference that

    nrfiles=2

is set?

If there is just one file, there are no offsets, and the test is
sequential writes, will all the jobs write to the same location at roughly
the same time? And in that case, even with DIRECT, couldn't the OS do
something to optimize these writes?

I've had issues with "direct=1", as well as with mounts in direct mode,
where multiple readers still ran at caching speeds. It was as if the OS
saw all these readers reading the same block and, even with DIRECT, still
handed out the block from memory. That sort of makes sense. But when I
gave each reader an offset, the speeds went down to something reasonable.
I've changed all my multi-user tests that use one file to use offsets,
with the offsets distributed throughout the file (this would be a nice
automatic option in fio); there is a sketch of that layout below.
I use one 8G file in all my tests so I can create the file just once, the
longest step of the test, and then reuse it with a varying number of jobs.
If it were a file per job, then to have the same effect the test would
create N files of size 8G/N, which would cause the files to be recreated
every time.
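
For what it's worth, here is roughly what that layout looks like as a fio
job (sketch only; the file name and sizes are placeholders, and it assumes
the offset_increment option is available in your fio version):

    [global]
    rw=write
    ioengine=sync
    direct=1
    bs=128k
    # each job covers 1g of the 8g file; job N starts at offset N * 1g
    size=1g
    offset_increment=1g
    numjobs=8
    runtime=120

    [spread]
    filename=/mnt/test/testfile8g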

Also:

What's the difference between timeout and runtime?

timeout=timeout Limit run time to timeout seconds.
runtime=int     Terminate processing after the specified number of seconds.

- Kyle
dboptimizer.com

On Tue, Jun 26, 2012 at 7:56 PM, Homer Li <01jay.ly@gmail.com> wrote:
>
> Hello, all;
>
>         When I used fio to benchmark a single HDD, the write speed
> reached 700~800MB/s.
>         The HDD model is WD2002FYPS-18U1B0, 7200rpm, 2TB.
>         In my RAID config there is only one HDD in every RAID0
> group, like JBOD.
>
>         When I change numjobs=8 to numjobs=1, the benchmark
> result looks ok: about 100MB/s.
>
>         Is there anything wrong with my fio config?
[…]




--
- Kyle

O: +1.415.341.3430
F: +1.650.494.1676
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com


* Re: Single HDD , 700MB/s ??
       [not found]   ` <CAG1+85FPHgWje5XFFkj3NAQh53c4887749U3E+mcAFshx_arww@mail.gmail.com>
@ 2012-06-29  8:09     ` Homer Li
  0 siblings, 0 replies; 6+ messages in thread
From: Homer Li @ 2012-06-29  8:09 UTC (permalink / raw)
  To: fio

Hi ladies and gentlemen;

    I enabled the debug option, but I can't understand these messages.

    BTW, some additional hardware information:
    # cat /proc/meminfo  | grep MemTotal
       MemTotal:     24674740 kB
    # /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -a0 | grep "Memory Size"
       Memory Size      : 256MB

Someone reminded me that it might be the burst rate and suggested adding
ramp_time. I think ramp_time is not the important part; the key is to wait
until the cache is filled.

I enabled the RAID controller cache, modified iodepth, added ramp_time,
and waited until the benchmark was over. I hoped it was just the burst
rate, but the "crazy" speed lasted until the last second.


Benchmark details:

[root@nas-34-1 ~]# cat /tmp/fio2.cfg (the attachment is this benchmark)
[global]
rw=write
ioengine=sync
size=100g
numjobs=8
bssplit=128k/20:256k/20:512k/20:1024k/40
nrfiles=2
iodepth=1
lockmem=2g
direct=1
ramp_time=20
thread

[sdb]
filename=/dev/sdb

#fio --debug=io /tmp/fio2.cfg | tee /tmp/debug
........
Run status group 0 (all jobs):
  WRITE: io=803451MB, aggrb=796952KB/s, minb=99591KB/s,
maxb=99653KB/s, mint=1032187msec, maxt=1032350msec
Disk stats (read/write):
  sdb: ios=0/3419213, merge=0/7, ticks=0/19600644, in_queue=19599902,
util=100.00%

Then I increased iodepth to 512.
Run status group 0 (all jobs):
  WRITE: io=350805MB, aggrb=320481KB/s, minb=39357KB/s,
maxb=41759KB/s, mint=1120688msec, maxt=1120889msec

Disk stats (read/write):
  sdb: ios=0/1508802, merge=0/170, ticks=0/21431737,
in_queue=21433151, util=100.00%


debug log:
fio: set debug option io
io       27106 load ioengine sync
sdb: (g=0): rw=write, bs=128K-1M/128K-1M, ioengine=sync, iodepth=1
io       27106 load ioengine sync
io       27106 load ioengine sync
io       27106 load ioengine sync
io       27106 load ioengine sync
io       27106 load ioengine sync
io       27106 load ioengine sync
...
io       27106 load ioengine sync
sdb: (g=0): rw=write, bs=128K-1M/128K-1M, ioengine=sync, iodepth=1
fio 2.0.7
Starting 8 threads
io       27106 invalidate cache /dev/sdb: 0/107374182400
io       27106 fill_io_u: io_u 0x2aab2c101910: off=0/len=524288/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab2c101910: off=0/len=524288/ddir=1//dev/sdb
io       27106 ->prep(0x2aab2c101910)=0
io       27106 queue: io_u 0x2aab2c101910: off=0/len=524288/ddir=1//dev/sdb
io       27106 invalidate cache /dev/sdb: 0/107374182400
io       27106 fill_io_u: io_u 0x116eed00: off=0/len=131072/ddir=1//dev/sdb
io       27106 prep: io_u 0x116eed00: off=0/len=131072/ddir=1//dev/sdb
io       27106 ->prep(0x116eed00)=0
io       27106 queue: io_u 0x116eed00: off=0/len=131072/ddir=1//dev/sdb
io       27106 invalidate cache /dev/sdb: 0/107374182400
io       27106 fill_io_u: io_u 0x116ef010: off=0/len=524288/ddir=1//dev/sdb
io       27106 prep: io_u 0x116ef010: off=0/len=524288/ddir=1//dev/sdb
io       27106 ->prep(0x116ef010)=0
io       27106 queue: io_u 0x116ef010: off=0/len=524288/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab2c101910:
off=0/len=524288/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab2c101910:
off=524288/len=131072/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab2c101910: off=524288/len=131072/ddir=1//dev/sdb
io       27106 ->prep(0x2aab2c101910)=0
io       27106 queue: io_u 0x2aab2c101910: off=524288/len=131072/ddir=1//dev/sdb
io       27106 io complete: io_u 0x116eed00: off=0/len=131072/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x116eed00:
off=131072/len=1048576/ddir=1//dev/sdb
io       27106 prep: io_u 0x116eed00: off=131072/len=1048576/ddir=1//dev/sdb
io       27106 ->prep(0x116eed00)=0
io       27106 queue: io_u 0x116eed00: off=131072/len=1048576/ddir=1//dev/sdb
io       27106 io complete: io_u 0x116ef010: off=0/len=524288/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x116ef010: off=524288/len=262144/ddir=1//dev/sdb
io       27106 prep: io_u 0x116ef010: off=524288/len=262144/ddir=1//dev/sdb
io       27106 ->prep(0x116ef010)=0
io       27106 queue: io_u 0x116ef010: off=524288/len=262144/ddir=1//dev/sdb
io       27106 invalidate cache /dev/sdb: 0/107374182400
io       27106 invalidate cache /dev/sdb: 0/107374182400
io       27106 invalidate cache /dev/sdb: 0/107374182400
io       io       27106 27106 fill_io_u: io_u 0x2aab38000900:
off=0/len=262144/ddir=1fill_io_u: io_u 0x2aab30101ba0:
off=0/len=1048576/ddir=1//dev/sdb//dev/sdb

io       27106 io       io       fill_io_u: io_u 0x2aab30101910:
off=0/len=1048576/ddir=127106 27106 //dev/sdb
io       prep: io_u 0x2aab30101ba0: off=0/len=1048576/ddir=127106
prep: io_u 0x2aab38000900: off=0/len=262144/ddir=1//dev/sdb
//dev/sdb
prep: io_u 0x2aab30101910: off=0/len=1048576/ddir=1//dev/sdb
io       io       27106 27106 io       ->prep(0x2aab38000900)=0
->prep(0x2aab30101ba0)=0
27106 ->prep(0x2aab30101910)=0
io       27106 queue: io_u 0x2aab38000900: off=0/len=262144/ddir=1//dev/sdb
io       27106 queue: io_u 0x2aab30101ba0: off=0/len=1048576/ddir=1//dev/sdb
io       27106 queue: io_u 0x2aab30101910: off=0/len=1048576/ddir=1//dev/sdb
io       27106 invalidate cache /dev/sdb: 0/107374182400
io       27106 fill_io_u: io_u 0x116eea30: off=0/len=262144/ddir=1//dev/sdb
io       27106 prep: io_u 0x116eea30: off=0/len=262144/ddir=1//dev/sdb
io       27106 ->prep(0x116eea30)=0
io       27106 queue: io_u 0x116eea30: off=0/len=262144/ddir=1//dev/sdb
io       27106 invalidate cache /dev/sdb: 0/107374182400
io       27106 fill_io_u: io_u 0x2aab2c101ba0: off=0/len=262144/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab2c101ba0: off=0/len=262144/ddir=1//dev/sdb
io       27106 ->prep(0x2aab2c101ba0)=0
io       27106 queue: io_u 0x2aab2c101ba0: off=0/len=262144/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab2c101910:
off=524288/len=131072/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab2c101910:
off=655360/len=131072/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab2c101910: off=655360/len=131072/ddir=1//dev/sdb
io       27106 ->prep(0x2aab2c101910)=0
io       27106 queue: io_u 0x2aab2c101910: off=655360/len=131072/ddir=1//dev/sdb
io       27106 io complete: io_u 0x116eed00:
off=131072/len=1048576/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x116eed00:
off=1179648/len=262144/ddir=1//dev/sdb
io       27106 prep: io_u 0x116eed00: off=1179648/len=262144/ddir=1//dev/sdb
io       27106 ->prep(0x116eed00)=0
io       27106 queue: io_u 0x116eed00: off=1179648/len=262144/ddir=1//dev/sdb
io       27106 io complete: io_u 0x116ef010:
off=524288/len=262144/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x116ef010: off=786432/len=524288/ddir=1//dev/sdb
io       27106 prep: io_u 0x116ef010: off=786432/len=524288/ddir=1//dev/sdb
io       27106 ->prep(0x116ef010)=0
io       27106 queue: io_u 0x116ef010: off=786432/len=524288/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab30101910:
off=0/len=1048576/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab30101910:
off=1048576/len=262144/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab30101910: off=1048576/len=262144/ddir=1//dev/sdb
io       27106 ->prep(0x2aab30101910)=0
io       27106 queue: io_u 0x2aab30101910:
off=1048576/len=262144/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab38000900:
off=0/len=262144/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab38000900:
off=262144/len=524288/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab38000900: off=262144/len=524288/ddir=1//dev/sdb
io       27106 ->prep(0x2aab38000900)=0
io       27106 queue: io_u 0x2aab38000900: off=262144/len=524288/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab2c101910:
off=655360/len=131072/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab2c101910:
off=786432/len=1048576/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab2c101910: off=786432/len=1048576/ddir=1//dev/sdb
io       27106 ->prep(0x2aab2c101910)=0
io       27106 queue: io_u 0x2aab2c101910:
off=786432/len=1048576/ddir=1//dev/sdb
io       27106 io complete: io_u 0x116ef010:
off=786432/len=524288/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x116ef010:
off=1310720/len=1048576/ddir=1//dev/sdb
io       27106 prep: io_u 0x116ef010: off=1310720/len=1048576/ddir=1//dev/sdb
io       27106 ->prep(0x116ef010)=0
io       27106 queue: io_u 0x116ef010: off=1310720/len=1048576/ddir=1//dev/sdb
io       27106 io complete: io_u 0x116eed00:
off=1179648/len=262144/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x116eed00:
off=1441792/len=262144/ddir=1//dev/sdb
io       27106 prep: io_u 0x116eed00: off=1441792/len=262144/ddir=1//dev/sdb
io       27106 ->prep(0x116eed00)=0
io       27106 queue: io_u 0x116eed00: off=1441792/len=262144/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab38000900:
off=262144/len=524288/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab38000900:
off=786432/len=1048576/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab38000900: off=786432/len=1048576/ddir=1//dev/sdb
io       27106 ->prep(0x2aab38000900)=0
io       27106 queue: io_u 0x2aab38000900:
off=786432/len=1048576/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab2c101910:
off=786432/len=1048576/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab2c101910:
off=1835008/len=1048576/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab2c101910:
off=1835008/len=1048576/ddir=1//dev/sdb
io       27106 ->prep(0x2aab2c101910)=0
io       27106 queue: io_u 0x2aab2c101910:
off=1835008/len=1048576/ddir=1//dev/sdb
io       27106 io complete: io_u 0x116eed00:
off=1441792/len=262144/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x116eed00:
off=1703936/len=131072/ddir=1//dev/sdb
io       27106 prep: io_u 0x116eed00: off=1703936/len=131072/ddir=1//dev/sdb
io       27106 ->prep(0x116eed00)=0
io       27106 queue: io_u 0x116eed00: off=1703936/len=131072/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab38000900:
off=786432/len=1048576/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab38000900:
off=1835008/len=131072/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab38000900: off=1835008/len=131072/ddir=1//dev/sdb
io       27106 ->prep(0x2aab38000900)=0
io       27106 queue: io_u 0x2aab38000900:
off=1835008/len=131072/ddir=1//dev/sdb
io       27106 io complete: io_u 0x116ef010:
off=1310720/len=1048576/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x116ef010:
off=2359296/len=131072/ddir=1//dev/sdb
io       27106 prep: io_u 0x116ef010: off=2359296/len=131072/ddir=1//dev/sdb
io       27106 ->prep(0x116ef010)=0
io       27106 queue: io_u 0x116ef010: off=2359296/len=131072/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab2c101910:
off=1835008/len=1048576/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab2c101910:
off=2883584/len=524288/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab2c101910: off=2883584/len=524288/ddir=1//dev/sdb
io       27106 ->prep(0x2aab2c101910)=0
io       27106 io complete: io_u 0x116eed00:
off=1703936/len=131072/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x116eed00:
off=1835008/len=131072/ddir=1//dev/sdb
io       27106 prep: io_u 0x116eed00: off=1835008/len=131072/ddir=1//dev/sdb
io       27106 ->prep(0x116eed00)=0
io       27106 queue: io_u 0x2aab2c101910:
off=2883584/len=524288/ddir=1//dev/sdb
io       27106 queue: io_u 0x116eed00: off=1835008/len=131072/ddir=1//dev/sdb
io       27106 io complete: io_u 0x2aab38000900:
off=1835008/len=131072/ddir=1//dev/sdb
io       27106 fill_io_u: io_u 0x2aab38000900:
off=1966080/len=131072/ddir=1//dev/sdb
io       27106 prep: io_u 0x2aab38000900: off=1966080/len=131072/ddir=1//dev/sdb
io       27106 ->prep(0x2aab38000900)=0
io       27106 queue: io_u 0x2aab38000900:
off=1966080/len=131072/ddir=1//dev/sdb
io       27106 io complete: io_u 0x116ef010:
off=2359296/len=131072/ddir=1//dev/sdb


--
Homer Li
ShenZhen GuangDong Province

