* odd result with iodepth
@ 2012-03-29 17:10 Chuck Tuffli
  2012-03-29 17:38 ` Jeff Moyer
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Chuck Tuffli @ 2012-03-29 17:10 UTC (permalink / raw)
  To: fio

I've been testing disk performance using fio, and am seeing results I
can't explain. This may not necessarily be a problem with fio, or
perhaps I'm not using some of the parameters correctly. The
testing uses the latest fio.git on a RHEL system running a 2.6.32
kernel.

The fio test is:

fio --name=global --ioengine=libaio --rw=read --bs=512 --size=500m
--iodepth=32 --direct=1 --filename=/dev/sdb --name=job1

The IOPS reported in this case appear to correlate with an external analyzer.

If I change the iodepth to 64, fio reports double the IOPs reported in
the previous case, but the performance measured by the analyzer is the
same as in the iodepth=32 case (i.e. half what fio reports). I tried
the iodepth=64 case with another disk performance tool, and it is
reporting the same results as the external analyzer.

Interestingly, using iodepth=32 and adding a second job (--name=job2)
seems to have the same effect as iodepth=64 (reported performance is
2x compared to the external analyzer).

Am I doing something obviously wrong? What else can/should I check? TIA

---chuck

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: odd result with iodepth
  2012-03-29 17:10 odd result with iodepth Chuck Tuffli
@ 2012-03-29 17:38 ` Jeff Moyer
       [not found] ` <CANWz5fhJWZzg21aXRbHiNwTBMOiTEqDmGYHD0iTybQPTWTaw8w@mail.gmail.com>
  2012-03-30  2:15 ` Zhu Yanhai
  2 siblings, 0 replies; 7+ messages in thread
From: Jeff Moyer @ 2012-03-29 17:38 UTC (permalink / raw)
  To: Chuck Tuffli; +Cc: fio

Chuck Tuffli <ctuffli@gmail.com> writes:

> I've been testing disk performance using fio, and am seeing results I
> can't explain. This may not necessarily be a problem with fio, or
> perhaps I'm not using some of the parameters correctly. The
> testing uses the latest fio.git on a RHEL system running a 2.6.32
> kernel.
>
> The fio test is:
>
> fio --name=global --ioengine=libaio --rw=read --bs=512 --size=500m
> --iodepth=32 --direct=1 --filename=/dev/sdb --name=job1
>
> The IOPS reported in this case appear to correlate with an external analyzer.
>
> If I change the iodepth to 64, fio reports double the IOPs reported in
> the previous case, but the performance measured by the analyzer is the
> same as in the iodepth=32 case (i.e. half what fio reports). I tried
> the iodepth=64 case with another disk performance tool, and it is
> reporting the same results as the external analyzer.
>
> Interestingly, using iodepth=32 and adding a second job (--name=job2)
> seems to have the same effect as iodepth=64 (reported performance is
> 2x compared to the external analyzer).
>
> Am I doing something obviously wrong? What else can/should I check? TIA

What's the maximum queue depth for the sd device?  I'll bet it's 32...
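
One quick way to check, assuming the disk is /dev/sdb (adjust the path
to match your setup):

cat /sys/block/sdb/device/queue_depth   # queue depth the SCSI device/driver advertises
cat /sys/block/sdb/queue/nr_requests    # block-layer request queue size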

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: odd result with iodepth
       [not found] ` <CANWz5fhJWZzg21aXRbHiNwTBMOiTEqDmGYHD0iTybQPTWTaw8w@mail.gmail.com>
@ 2012-03-29 18:27   ` Chuck Tuffli
  2012-03-29 18:30     ` Jeff Moyer
  0 siblings, 1 reply; 7+ messages in thread
From: Chuck Tuffli @ 2012-03-29 18:27 UTC (permalink / raw)
  To: fio

On Thu, Mar 29, 2012 at 10:14 AM, Chris Worley <worleys@gmail.com> wrote:
> Sequential merging?  Try "randread" instead.

Great suggestion. Doing some additional analysis, it does appear that
IOs are probably getting merged, as I see individual 2-4 KB IO requests
on the wire.
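
One way to watch for that from the host side, assuming the sysstat
package is installed and the disk is /dev/sdb, is to check the merge
counters while the test runs:

# rrqm/s shows read requests merged per second; avgrq-sz is the average
# request size in 512-byte sectors
iostat -x 1 /dev/sdb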

Googling around this area seems to suggest this kernel behavior isn't
something I can disable. So if I honest to goodness *only* want 512B
IO, is ioengine=sg the best route?
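
In case it helps, a rough (untested) sketch of what an sg run might look
like; the /dev/sg1 path is an assumption and needs to map to the same disk:

fio --name=job1 --ioengine=sg --rw=read --bs=512 --size=500m --filename=/dev/sg1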

---chuck

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: odd result with iodepth
  2012-03-29 18:27   ` Chuck Tuffli
@ 2012-03-29 18:30     ` Jeff Moyer
  2012-03-29 18:49       ` Jeff Moyer
  2012-03-30 15:41       ` Chuck Tuffli
  0 siblings, 2 replies; 7+ messages in thread
From: Jeff Moyer @ 2012-03-29 18:30 UTC (permalink / raw)
  To: Chuck Tuffli; +Cc: fio

Chuck Tuffli <ctuffli@gmail.com> writes:

> On Thu, Mar 29, 2012 at 10:14 AM, Chris Worley <worleys@gmail.com> wrote:
>> Sequential merging?  Try "randread" instead.
>
> Great suggestion. Doing some additional analysis, it does appear that
> IOs are probably getting merged, as I see individual 2-4 KB IO requests
> on the wire.
>
> Googling around this area seems to suggest this kernel behavior isn't
> something I can disable. So if I honest to goodness *only* want 512B
> IO, is ioengine=sg the best route?

Try setting /sys/block/sdX/queue/nomerges to 2.  That should completely
disable merging.
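
For example, assuming the device under test is /dev/sdb (needs root, and
the setting does not survive a reboot):

echo 2 > /sys/block/sdb/queue/nomerges   # 0 = all merging, 1 = one-hit merges only, 2 = no merging
cat /sys/block/sdb/queue/nomerges        # verify the value stuck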

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: odd result with iodepth
  2012-03-29 18:30     ` Jeff Moyer
@ 2012-03-29 18:49       ` Jeff Moyer
  2012-03-30 15:41       ` Chuck Tuffli
  1 sibling, 0 replies; 7+ messages in thread
From: Jeff Moyer @ 2012-03-29 18:49 UTC (permalink / raw)
  To: Chuck Tuffli; +Cc: fio

Jeff Moyer <jmoyer@redhat.com> writes:

> Chuck Tuffli <ctuffli@gmail.com> writes:
>
>> On Thu, Mar 29, 2012 at 10:14 AM, Chris Worley <worleys@gmail.com> wrote:
>>> Sequential merging?  Try "randread" instead.
>>
>> Great suggestion. Doing some additional analysis, it does appear that
>> IOs are probably getting merged, as I see individual 2-4 KB IO requests
>> on the wire.
>>
>> Googling around this area seems to suggest this kernel behavior isn't
>> something I can disable. So if I honest to goodness *only* want 512B
>> IO, is ioengine=sg the best route?
>
> Try setting /sys/block/sdX/queue/nomerges to 2.  That should completely
> disable merging.

Oh wait, you said you're using a RHEL 2.6.32 based kernel, right?  That
isn't implemented there.  You'll have to backport this patch:

commit 488991e28e55b4fbca8067edf0259f69d1a6f92c
Author: Alan D. Brunelle <Alan.Brunelle@hp.com>
Date:   Fri Jan 29 09:04:08 2010 +0100

    block: Added in stricter no merge semantics for block I/O
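
Assuming you have a mainline kernel git tree handy, you can review the
patch and try carrying it over to your source tree with something like:

git show 488991e28e55b4fbca8067edf0259f69d1a6f92c        # review the change
git cherry-pick 488991e28e55b4fbca8067edf0259f69d1a6f92c # apply it to the current branch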

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: odd result with iodepth
  2012-03-29 17:10 odd result with iodepth Chuck Tuffli
  2012-03-29 17:38 ` Jeff Moyer
       [not found] ` <CANWz5fhJWZzg21aXRbHiNwTBMOiTEqDmGYHD0iTybQPTWTaw8w@mail.gmail.com>
@ 2012-03-30  2:15 ` Zhu Yanhai
  2 siblings, 0 replies; 7+ messages in thread
From: Zhu Yanhai @ 2012-03-30  2:15 UTC (permalink / raw)
  To: Chuck Tuffli; +Cc: fio

2012/3/30 Chuck Tuffli <ctuffli@gmail.com>:
> I've been testing disk performance using fio, and am seeing results I
> can't explain. This may not necessarily be a problem with fio, or
> perhaps I'm not using some of the parameters correctly. The
> testing uses the latest fio.git on a RHEL system running a 2.6.32
> kernel.
>
> The fio test is:
>
> fio --name=global --ioengine=libaio --rw=read --bs=512 --size=500m
> --iodepth=32 --direct=1 --filename=/dev/sdb --name=job1
>
> The IOPS reported in this case appear to correlate with an external analyzer.
>
> If I change the iodepth to 64, fio reports double the IOPs reported in
> the previous case, but the performance measured by the analyzer is the
> same as in the iodepth=32 case (i.e. half what fio reports). I tried
> the iodepth=64 case with another disk performance tool, and it is
> reporting the same results as the external analyzer.
>
> Interestingly, using iodepth=32 and adding a second job (--name=job2)
> seems to have the same effect as iodepth=64 (reported performance is
> 2x compared to the external analyzer).
>
> Am I doing something obviously wrong? What else can/should I check? TIA
>
> ---chuck

Can you see the same result if using  --rw=randread?
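
Something like the original command with only the rw parameter switched:

fio --name=global --ioengine=libaio --rw=randread --bs=512 --size=500m
--iodepth=32 --direct=1 --filename=/dev/sdb --name=job1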

--
Regards,
Zhu Yanhai

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: odd result with iodepth
  2012-03-29 18:30     ` Jeff Moyer
  2012-03-29 18:49       ` Jeff Moyer
@ 2012-03-30 15:41       ` Chuck Tuffli
  1 sibling, 0 replies; 7+ messages in thread
From: Chuck Tuffli @ 2012-03-30 15:41 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: fio

On Thu, Mar 29, 2012 at 11:30 AM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Chuck Tuffli <ctuffli@gmail.com> writes:
>
>> On Thu, Mar 29, 2012 at 10:14 AM, Chris Worley <worleys@gmail.com> wrote:
>>> Sequential merging?  Try "randread" instead.
>>
>> Great suggestion. Doing some additional analysis, it does appear that
>> IOs are probably getting merged, as I see individual 2-4 KB IO requests
>> on the wire.
>>
>> Googling around this area seems to suggest this kernel behavior isn't
>> something I can disable. So if I honest to goodness *only* want 512B
>> IO, is ioengine=sg the best route?
>
> Try setting /sys/block/sdX/queue/nomerges to 2.  That should completely
> disable merging.

Jeff -

I upgraded to a 3.0 series kernel, set nomerges to 2, and now all the
tools agree on the IOPS number. Thanks!

---chuck

^ permalink raw reply	[flat|nested] 7+ messages in thread
