* fio accessing boot drive
@ 2011-05-05  1:12 Vikram Seth
  2011-05-05  4:39 ` Josh Aune
  0 siblings, 1 reply; 4+ messages in thread
From: Vikram Seth @ 2011-05-05  1:12 UTC (permalink / raw)
  To: fio

[-- Attachment #1: Type: text/plain, Size: 805 bytes --]

Hi,

I recently started using fio for raw disk performance testing.
I noticed at times that while I am running the attached test file on
/dev/sdb, which is a RAID 6 volume, there is a lot of activity on the
boot disk /dev/sda too.

Here's an excerpt from the iostat output:

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  7300.20    0.00   73.60     0.00    27.16   755.76   143.93 1950.19  13.59 100.02
sdb               0.00     0.00    0.00  210.00     0.00     0.82     8.00    10.41   49.34   4.76 100.02

I'd appreciate any insight into what could be causing this kind of
access to the boot drive.
Can these accesses affect the overall performance results on the actual test drive?

Thanks,
Vikram.

PS: I am using fio version 1.50

[-- Attachment #2: fio_test.txt --]
[-- Type: text/plain, Size: 1293 bytes --]

[global]
description=Standard Performance test
runtime=120
direct=1
filename=/dev/sdb
numjobs=20
group_reporting

[seqread_4]
rw=read
bs=4k

[seqread_8]
stonewall
rw=read
bs=8k

[seqread_16]
stonewall
rw=read
bs=16k

[seqread_32]
stonewall
rw=read
bs=32k

[seqread_64]
stonewall
rw=read
bs=64k

[seqread_128]
stonewall
rw=read
bs=128k

[seqread_256]
stonewall
rw=read
bs=256k

[seqwrite_4]
stonewall
rw=write
bs=4k

[seqwrite_8]
stonewall
rw=write
bs=8k

[seqwrite_16]
stonewall
rw=write
bs=16k

[seqwrite_32]
stonewall
rw=write
bs=32k

[seqwrite_64]
stonewall
rw=write
bs=64k

[seqwrite_128]
stonewall
rw=write
bs=128k

[seqwrite_256]
stonewall
rw=write
bs=256k

[randread_4]
stonewall
rw=randread
bs=4k

[randread_8]
stonewall
rw=randread
bs=8k

[randread_16]
stonewall
rw=randread
bs=16k

[randread_32]
stonewall
rw=randread
bs=32k

[randread_64]
stonewall
rw=randread
bs=64k

[randread_128]
stonewall
rw=randread
bs=128k

[randread_256]
stonewall
rw=randread
bs=256k

[randwrite_4]
stonewall
rw=randwrite
bs=4k

[randwrite_8]
stonewall
rw=randwrite
bs=8k

[randwrite_16]
stonewall
rw=randwrite
bs=16k

[randwrite_32]
stonewall
rw=randwrite
bs=32k

[randwrite_64]
stonewall
rw=randwrite
bs=64k

[randwrite_128]
stonewall
rw=randwrite
bs=128k

[randwrite_256]
stonewall
rw=randwrite
bs=256k


* Re: fio accessing boot drive
  2011-05-05  1:12 fio accessing boot drive Vikram Seth
@ 2011-05-05  4:39 ` Josh Aune
  2011-05-19  1:50   ` Vikram Seth
  0 siblings, 1 reply; 4+ messages in thread
From: Josh Aune @ 2011-05-05  4:39 UTC (permalink / raw)
  To: Vikram Seth; +Cc: fio

On Wednesday, May 4, 2011, Vikram Seth <seth.vik@gmail.com> wrote:
> Hi,
>
> I recently started using fio for raw disk performance testing.
> I noticed at times that while I am running the attached test file on
> /dev/sdb, which is a RAID 6 volume, there is a lot of activity on the
> boot disk /dev/sda too.

Check for files in /tmp — fio keeps backing files for its shared state
(including the random map) there. To avoid generating them, disable the
random map.
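Something along these lines should show them while a job is running (the
backing-file name pattern here is a guess and may differ by fio version):

```shell
# List fio's shared-memory backing files, if any are present in /tmp;
# fall back to a message when none exist.
ls -l /tmp/.fio_smalloc.* 2>/dev/null || echo "no fio backing files in /tmp"
```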

Josh

>
> Here's an excerpt from the iostat output:
>
> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00  7300.20    0.00   73.60     0.00    27.16   755.76   143.93 1950.19  13.59 100.02
> sdb               0.00     0.00    0.00  210.00     0.00     0.82     8.00    10.41   49.34   4.76 100.02
>
> I'd appreciate any insight into what could be causing this kind of
> access to the boot drive.
> Can these accesses affect the overall performance results on the actual test drive?
>
> Thanks,
> Vikram.
>
> PS: I am using fio version 1.50
>


* Re: fio accessing boot drive
  2011-05-05  4:39 ` Josh Aune
@ 2011-05-19  1:50   ` Vikram Seth
  2011-05-19  7:58     ` Jens Axboe
  0 siblings, 1 reply; 4+ messages in thread
From: Vikram Seth @ 2011-05-19  1:50 UTC (permalink / raw)
  To: Josh Aune; +Cc: fio

Hi Josh,

Thanks for the response.

Do you mean the 'norandommap' option?
If I disable the random map, won't I lose coverage in random I/O testing?

Vikram.

On Wed, May 4, 2011 at 9:39 PM, Josh Aune <luken@omner.org> wrote:
> On Wednesday, May 4, 2011, Vikram Seth <seth.vik@gmail.com> wrote:
>> Hi,
>>
>> I recently started using fio for raw disk performance testing.
>> I noticed at times that while I am running the attached test file on
>> /dev/sdb, which is a RAID 6 volume, there is a lot of activity on the
>> boot disk /dev/sda too.
>
> Check for files in /tmp — fio keeps backing files for its shared state
> (including the random map) there. To avoid generating them, disable the
> random map.
>
> Josh
>
>>
>> Here's an excerpt from the iostat output:
>>
>> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
>> sda               0.00  7300.20    0.00   73.60     0.00    27.16   755.76   143.93 1950.19  13.59 100.02
>> sdb               0.00     0.00    0.00  210.00     0.00     0.82     8.00    10.41   49.34   4.76 100.02
>>
>> I'd appreciate any insight into what could be causing this kind of
>> access to the boot drive.
>> Can these accesses affect the overall performance results on the actual test drive?
>>
>> Thanks,
>> Vikram.
>>
>> PS: I am using fio version 1.50
>>
>


* Re: fio accessing boot drive
  2011-05-19  1:50   ` Vikram Seth
@ 2011-05-19  7:58     ` Jens Axboe
  0 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2011-05-19  7:58 UTC (permalink / raw)
  To: Vikram Seth; +Cc: Josh Aune, fio

On 2011-05-19 03:50, Vikram Seth wrote:
> Hi Josh,
> 
> Thanks for the response.
> 
> Do you mean the 'norandommap' option?
> If I disable the random map, won't I lose coverage in random I/O testing?

You will. With the random map turned on, you are guaranteed to hit
every block, and only once. Without it, you are at the mercy of the
random number generator. But it is of high quality, so coverage will
still be excellent and it will likely be faster (and it retains
randomness at the end of the run; the random map tends to give up near
the very end).

Unless you absolutely need the coverage guarantee, I would just turn it
off if it's a problem. Alternatively, make /tmp a tmpfs or similar so
updates to the maps don't end up touching disk.
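For reference, the job-file change is a single line in the [global]
section (a sketch, untested against this exact workload):

```
[global]
norandommap
```

The tmpfs route is a normal `mount -t tmpfs tmpfs /tmp` (or the
equivalent /etc/fstab entry) before the run, so the map updates stay in
RAM instead of hitting /dev/sda.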

-- 
Jens Axboe


