All of lore.kernel.org
 help / color / mirror / Atom feed
* [Question] How to perform stride access?
@ 2014-09-23 13:47 Akira Hayakawa
  2014-09-23 14:05 ` Andrey Kuzmin
  0 siblings, 1 reply; 19+ messages in thread
From: Akira Hayakawa @ 2014-09-23 13:47 UTC (permalink / raw)
  To: fio

Hi,

I want to perform strided write access to a block device, but
I don't have a clue how to do it.

What I want is a strided access pattern where each write is
1 sector in size and consecutive writes are 7 sectors apart
(i.e. only the first sector of each 4KB block is written).

For example, the sector offsets would be:
0, 8, 16, 24, 32, ...

The pattern should repeat over the device until a certain amount
has been written; in my case, 32MB of writes to a 508KB device.
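For concreteness, the intended offsets can be sketched as follows (an illustrative sketch assuming 512-byte sectors, added for clarity; this is not fio syntax):

```python
SECTOR = 512  # bytes per sector

def stride_offsets(stride_sectors=8, count=5):
    """Byte offsets of 1-sector writes at an 8-sector (4KB) stride."""
    return [i * stride_sectors * SECTOR for i in range(count)]

# Sector numbers 0, 8, 16, 24, 32 map to these byte offsets:
print(stride_offsets())  # [0, 4096, 8192, 12288, 16384]
```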

I expected the command below to work that way, but it doesn't.
Instead, it appears to perform an ordinary 512KB sequential write:
fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512

My questions are:
1) How do I perform strided write access in fio?
2) If fio is not an appropriate tool for this purpose, is it easy to fix?
   Or do you recommend another tool?

- Akira

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-23 13:47 [Question] How to perform stride access? Akira Hayakawa
@ 2014-09-23 14:05 ` Andrey Kuzmin
  2014-09-24  8:35   ` Akira Hayakawa
  0 siblings, 1 reply; 19+ messages in thread
From: Andrey Kuzmin @ 2014-09-23 14:05 UTC (permalink / raw)
  To: Akira Hayakawa; +Cc: fio

The offset modifier under rw= should do the trick; consult
https://github.com/axboe/fio/blob/master/HOWTO for details.

Best regards,
Andrey


On Tue, Sep 23, 2014 at 5:47 PM, Akira Hayakawa <ruby.wktk@gmail.com> wrote:
> Hi,
>
> I want to perform strided write access to a block device, but
> I don't have a clue how to do it.
>
> What I want is a strided access pattern where each write is
> 1 sector in size and consecutive writes are 7 sectors apart
> (i.e. only the first sector of each 4KB block is written).
>
> For example, the sector offsets would be:
> 0, 8, 16, 24, 32, ...
>
> The pattern should repeat over the device until a certain amount
> has been written; in my case, 32MB of writes to a 508KB device.
>
> I expected the command below to work that way, but it doesn't.
> Instead, it appears to perform an ordinary 512KB sequential write:
> fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512
>
> My questions are:
> 1) How do I perform strided write access in fio?
> 2) If fio is not an appropriate tool for this purpose, is it easy to fix?
>    Or do you recommend another tool?
>
> - Akira
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-23 14:05 ` Andrey Kuzmin
@ 2014-09-24  8:35   ` Akira Hayakawa
       [not found]     ` <CANvN+emPbk+MwwNoABs-rdWdJbn+JD+O0GVAGftR4w7mNVndcg@mail.gmail.com>
  2014-09-24  9:52     ` Sitsofe Wheeler
  0 siblings, 2 replies; 19+ messages in thread
From: Akira Hayakawa @ 2014-09-24  8:35 UTC (permalink / raw)
  To: andrey.v.kuzmin; +Cc: fio

Thanks Andrey,

However, I think I still have a problem.

I modified the command

From:
>> fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512
To:
fio --name=test --filename=#{dev.path} --rw=write:4k --ioengine=libaio --direct=1 --io_limit=32M --bs=512
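As a sanity check on the stride this command produces: assuming, per Jens's arithmetic later in the thread, that rw=write:N advances the offset by bs + N after each write, write:4k with bs=512 gives a 4.5KB stride rather than the intended 4KB one (a hypothetical sketch; the write:3584 alternative is an inference, not something confirmed in the thread):

```python
SECTOR = 512

def seq_offsets(bs, skip, n):
    """Offsets assuming rw=write:<skip> advances bs + skip bytes per IO."""
    return [i * (bs + skip) for i in range(n)]

# rw=write:4k with bs=512: 4.5KB stride, i.e. sectors 0, 9, 18, ...
print([o // SECTOR for o in seq_offsets(512, 4096, 3)])  # [0, 9, 18]
# rw=write:3584 would give the intended 4KB stride: sectors 0, 8, 16, ...
print([o // SECTOR for o in seq_offsets(512, 3584, 3)])  # [0, 8, 16]
```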

The result is that the runtime is too short.
I guess fio stops as soon as it reaches the end of the device.
However, I want it to repeat over and over again until io_limit is fully consumed.

Note that the device is smaller than 32M (it is only 508B).
So, it should repeat more than 60 times.

How can I repeat the workload?

Or,

Building a hand-made random map would suffice, I guess.

- Akira


On 9/23/14 11:05 PM, Andrey Kuzmin wrote:
> The offset modifier under rw= should do the trick; consult
> https://github.com/axboe/fio/blob/master/HOWTO for details.
> 
> Best regards,
> Andrey
> 
> 
> On Tue, Sep 23, 2014 at 5:47 PM, Akira Hayakawa <ruby.wktk@gmail.com> wrote:
>> Hi,
>>
>> I want to perform strided write access to a block device, but
>> I don't have a clue how to do it.
>>
>> What I want is a strided access pattern where each write is
>> 1 sector in size and consecutive writes are 7 sectors apart
>> (i.e. only the first sector of each 4KB block is written).
>>
>> For example, the sector offsets would be:
>> 0, 8, 16, 24, 32, ...
>>
>> The pattern should repeat over the device until a certain amount
>> has been written; in my case, 32MB of writes to a 508KB device.
>>
>> I expected the command below to work that way, but it doesn't.
>> Instead, it appears to perform an ordinary 512KB sequential write:
>> fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512
>>
>> My questions are:
>> 1) How do I perform strided write access in fio?
>> 2) If fio is not an appropriate tool for this purpose, is it easy to fix?
>>    Or do you recommend another tool?
>>
>> - Akira
>> --
>> To unsubscribe from this list: send the line "unsubscribe fio" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
       [not found]     ` <CANvN+emPbk+MwwNoABs-rdWdJbn+JD+O0GVAGftR4w7mNVndcg@mail.gmail.com>
@ 2014-09-24  9:28       ` Akira Hayakawa
  0 siblings, 0 replies; 19+ messages in thread
From: Akira Hayakawa @ 2014-09-24  9:28 UTC (permalink / raw)
  To: andrey.v.kuzmin; +Cc: fio

Andrey,

> You might want to specify iosize or use a time-based run.
Could you explain the difference between bs and iosize?
My understanding is that the bs option specifies the I/O size.
Do you mean I need to add "--size=32M"?

A time-based run won't help in my case because I want to measure
the runtime for 32MB of writes.

- Akira

On 9/24/14 6:10 PM, Andrey Kuzmin wrote:
> You might want to specify iosize or use a time-based run.
> 
> Regards,
> Andrey
> 
> On Sep 24, 2014 12:35 PM, "Akira Hayakawa" <ruby.wktk@gmail.com <mailto:ruby.wktk@gmail.com>> wrote:
>>
>> Thanks Andrey,
>>
>> However, I think I still have a problem.
>>
>> I modified the command
>>
>> From:
>> >> fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512
>> To:
>> fio --name=test --filename=#{dev.path} --rw=write:4k --ioengine=libaio --direct=1 --io_limit=32M --bs=512
>>
>> The result is that the runtime is too short.
>> I guess fio stops as soon as it reaches the end of the device.
>> However, I want it to repeat over and over again until io_limit is fully consumed.
>>
>> Note that the device is smaller than 32M (it is only 508B).
>> So, it should repeat more than 60 times.
>>
>> How can I repeat the workload?
>>
>> Or,
>>
>> Building a hand-made random map would suffice, I guess.
>>
>> - Akira
>>
>>
>> On 9/23/14 11:05 PM, Andrey Kuzmin wrote:
>> > The offset modifier under rw= should do the trick; consult
>> > https://github.com/axboe/fio/blob/master/HOWTO for details.
>> >
>> > Best regards,
>> > Andrey
>> >
>> >
>> > On Tue, Sep 23, 2014 at 5:47 PM, Akira Hayakawa <ruby.wktk@gmail.com <mailto:ruby.wktk@gmail.com>> wrote:
>> >> Hi,
>> >>
>> >> I want to perform strided write access to a block device, but
>> >> I don't have a clue how to do it.
>> >>
>> >> What I want is a strided access pattern where each write is
>> >> 1 sector in size and consecutive writes are 7 sectors apart
>> >> (i.e. only the first sector of each 4KB block is written).
>> >>
>> >> For example, the sector offsets would be:
>> >> 0, 8, 16, 24, 32, ...
>> >>
>> >> The pattern should repeat over the device until a certain amount
>> >> has been written; in my case, 32MB of writes to a 508KB device.
>> >>
>> >> I expected the command below to work that way, but it doesn't.
>> >> Instead, it appears to perform an ordinary 512KB sequential write:
>> >> fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512
>> >>
>> >> My questions are:
>> >> 1) How do I perform strided write access in fio?
>> >> 2) If fio is not an appropriate tool for this purpose, is it easy to fix?
>> >>    Or do you recommend another tool?
>> >>
>> >> - Akira
>> >> --
>> >> To unsubscribe from this list: send the line "unsubscribe fio" in
>> >> the body of a message to majordomo@vger.kernel.org <mailto:majordomo@vger.kernel.org>
>> >> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-24  8:35   ` Akira Hayakawa
       [not found]     ` <CANvN+emPbk+MwwNoABs-rdWdJbn+JD+O0GVAGftR4w7mNVndcg@mail.gmail.com>
@ 2014-09-24  9:52     ` Sitsofe Wheeler
  2014-09-24  9:58       ` Akira Hayakawa
  2014-09-24 21:22       ` Sitsofe Wheeler
  1 sibling, 2 replies; 19+ messages in thread
From: Sitsofe Wheeler @ 2014-09-24  9:52 UTC (permalink / raw)
  To: Akira Hayakawa; +Cc: andrey.v.kuzmin, fio

On 24 September 2014 09:35, Akira Hayakawa <ruby.wktk@gmail.com> wrote:
>
> However, I [...] think I still have a problem.
>
> I modified the command
>
> From:
>>> fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512
> To:
> fio --name=test --filename=#{dev.path} --rw=write:4k --ioengine=libaio --direct=1 --io_limit=32M --bs=512
>
> The result is that the runtime is too short.

This looks like a bug. I can reproduce it with 2.1.11-11-gb7f5 too:

dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --stonewall --name=1M
--io_limit=1M  --name=2M --io_limit=2M
[...]

Run status group 0 (all jobs):
  WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
mint=2msec, maxt=2msec

Run status group 1 (all jobs):
  WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
mint=2msec, maxt=2msec

Why isn't io 1024KB for group 0? Additionally, shouldn't the total io
written by each group be different? Jens?

> I guess fio stops as soon as it reaches the end of the device.
> However, I want it to repeat over and over again until io_limit is fully consumed.
>
> Note that the device is smaller than 32M (it is only 508B).

508 bytes? But your block size is 512 bytes! Am I misunderstanding
what you're doing?

> So, it should repeat more than 60 times.
>
> How can I repeat the workload?

number_ios fails too and using zonesize/zoneskip also doesn't help.
The only thing left that springs to mind is to use loops or fix this
bug :-)


> Or,
>
> Building a hand-made random map would suffice, I guess.

I'm not sure I follow. The workload you gave above is sequential with
holes (--rw=write:4k) - why would we need a random map?

-- 
Sitsofe | http://sucs.org/~sits/

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-24  9:52     ` Sitsofe Wheeler
@ 2014-09-24  9:58       ` Akira Hayakawa
  2014-09-24 21:22       ` Sitsofe Wheeler
  1 sibling, 0 replies; 19+ messages in thread
From: Akira Hayakawa @ 2014-09-24  9:58 UTC (permalink / raw)
  To: sitsofe; +Cc: andrey.v.kuzmin, fio

Hi Sitsofe,

I am using fio-2.1.12-11-g8fc4

>> Note that the device is smaller than 32M (it is only 508B).
> 
> 508 bytes? But your block size is 512 bytes! Am I misunderstanding
> what you're doing?
Sorry, I should have written 512kB. It was a mistake.

- Akira

On 9/24/14 6:52 PM, Sitsofe Wheeler wrote:
> On 24 September 2014 09:35, Akira Hayakawa <ruby.wktk@gmail.com> wrote:
>>
>> However, I [...] think I still have a problem.
>>
>> I modified the command
>>
>> From:
>>>> fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512
>> To:
>> fio --name=test --filename=#{dev.path} --rw=write:4k --ioengine=libaio --direct=1 --io_limit=32M --bs=512
>>
>> The result is that the runtime is too short.
> 
> This looks like a bug. I can reproduce it with 2.1.11-11-gb7f5 too:
> 
> dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
> fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --stonewall --name=1M
> --io_limit=1M  --name=2M --io_limit=2M
> [...]
> 
> Run status group 0 (all jobs):
>   WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
> mint=2msec, maxt=2msec
> 
> Run status group 1 (all jobs):
>   WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
> mint=2msec, maxt=2msec
> 
> Why isn't io 1024KB for group 0? Additionally, shouldn't the total io
> written by each group be different? Jens?
> 
>> I guess fio stops as soon as it reaches the end of the device.
>> However, I want it to repeat over and over again until io_limit is fully consumed.
>>
>> Note that the device is smaller than 32M (it is only 508B).
> 
> 508 bytes? But your block size is 512 bytes! Am I misunderstanding
> what you're doing?
> 
>> So, it should repeat more than 60 times.
>>
>> How can I repeat the workload?
> 
> number_ios fails too and using zonesize/zoneskip also doesn't help.
> The only thing left that springs to mind is to use loops or fix this
> bug :-)
> 
> 
>> Or,
>>
>> Building a hand-made random map would suffice, I guess.
> 
> I'm not sure I follow. The workload you gave above is sequential with
> holes (--rw=write:4k) - why would we need a random map?
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-24  9:52     ` Sitsofe Wheeler
  2014-09-24  9:58       ` Akira Hayakawa
@ 2014-09-24 21:22       ` Sitsofe Wheeler
  2014-09-28  2:24         ` Jens Axboe
  1 sibling, 1 reply; 19+ messages in thread
From: Sitsofe Wheeler @ 2014-09-24 21:22 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Akira Hayakawa, andrey.v.kuzmin, fio

(Adding Jens to the CC list)

On 24 September 2014 10:52, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
> On 24 September 2014 09:35, Akira Hayakawa <ruby.wktk@gmail.com> wrote:
>>
>> However, I [...] think I still have a problem.
>>
>> I modified the command
>>
>> From:
>>>> fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512
>> To:
>> fio --name=test --filename=#{dev.path} --rw=write:4k --ioengine=libaio --direct=1 --io_limit=32M --bs=512
>>
>> The result is that the runtime is too short.
>
> This looks like a bug. I can reproduce it with 2.1.11-11-gb7f5 too:
>
> dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
> fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --stonewall --name=1M
> --io_limit=1M  --name=2M --io_limit=2M
> [...]
>
> Run status group 0 (all jobs):
>   WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
> mint=2msec, maxt=2msec
>
> Run status group 1 (all jobs):
>   WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
> mint=2msec, maxt=2msec
>
> Why isn't io 1024KB for group 0? Additionally, shouldn't the total io
> written by each group be different? Jens?
>
>> I guess fio stops as soon as it reaches the end of the device.
>> However, I want it to repeat over and over again until io_limit is fully consumed.
>>
>> Note that the device is smaller than 32M (it is only 508B).
>
> 508 bytes? But your block size is 512 bytes! Am I misunderstanding
> what you're doing?
>
>> So, it should repeat more than 60 times.
>>
>> How can I repeat the workload?
>
> number_ios fails too and using zonesize/zoneskip also doesn't help.
> The only thing left that springs to mind is to use loops or fix this
> bug :-)
>
>
>> Or,
>>
>> Building a hand-made random map would suffice, I guess.
>
> I'm not sure I follow. The workload you gave above is sequential with
> holes (--rw=write:4k) - why would we need a random map?

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-24 21:22       ` Sitsofe Wheeler
@ 2014-09-28  2:24         ` Jens Axboe
  2014-09-28 10:32           ` Sitsofe Wheeler
  2014-09-28 10:36           ` Sitsofe Wheeler
  0 siblings, 2 replies; 19+ messages in thread
From: Jens Axboe @ 2014-09-28  2:24 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: Akira Hayakawa, andrey.v.kuzmin, fio

On 2014-09-24 15:22, Sitsofe Wheeler wrote:
> (Adding Jens to the CC list)
>
> On 24 September 2014 10:52, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>> On 24 September 2014 09:35, Akira Hayakawa <ruby.wktk@gmail.com> wrote:
>>>
>>> However, I [...] think I still have a problem.
>>>
>>> I modified the command
>>>
>>> From:
>>>>> fio --name=test --filename=#{dev.path} --rw=write --ioengine=libaio --direct=1 --io_limit=32M --size=100% --ba=4k --bs=512
>>> To:
>>> fio --name=test --filename=#{dev.path} --rw=write:4k --ioengine=libaio --direct=1 --io_limit=32M --bs=512
>>>
>>> The result is that the runtime is too short.
>>
>> This looks like a bug. I can reproduce it with 2.1.11-11-gb7f5 too:
>>
>> dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
>> fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --stonewall --name=1M
>> --io_limit=1M  --name=2M --io_limit=2M
>> [...]
>>
>> Run status group 0 (all jobs):
>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>> mint=2msec, maxt=2msec
>>
>> Run status group 1 (all jobs):
>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>> mint=2msec, maxt=2msec
>>
>> Why isn't io 1024KB for group 0? Additionally, shouldn't the total io
>> written by each group be different? Jens?

You are doing a sequential workload, skipping 4k every time. The first 
write will be to offset 0, the next to 8KB, etc. Write 128 would be to 
1040384, which is 1MB - 8KB. Hence the next feasible offset after that 
would be 1MB, which is the end of the file. So how could it do more than 
512KB of IO? That's 128 * 4KB.

I didn't read the whole thread in detail, just looked at your last 
example here. And for that one, I don't see anything wrong.
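Jens's arithmetic checks out; a small sketch makes it concrete (4KB writes at an 8KB stride over a 1MB file, as in the example above):

```python
BS = 4096            # block size: 4KB writes
STRIDE = 8192        # bs + 4k skip = 8KB between write offsets
FILE_SIZE = 1 << 20  # 1MB file

offsets = []
pos = 0
while pos + BS <= FILE_SIZE:  # stop once a write would cross EOF
    offsets.append(pos)
    pos += STRIDE

print(len(offsets))       # 128 writes fit
print(offsets[-1])        # last offset: 1040384 = 1MB - 8KB
print(len(offsets) * BS)  # total IO: 524288 bytes = 512KB
```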

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-28  2:24         ` Jens Axboe
@ 2014-09-28 10:32           ` Sitsofe Wheeler
  2014-09-28 10:36           ` Sitsofe Wheeler
  1 sibling, 0 replies; 19+ messages in thread
From: Sitsofe Wheeler @ 2014-09-28 10:32 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Akira Hayakawa, Andrey Kuzmin, fio

[-- Attachment #1: Type: text/plain, Size: 2764 bytes --]

On 28 September 2014 03:24, Jens Axboe <axboe@kernel.dk> wrote:
>> On 24 September 2014 10:52, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>>>
>>> This looks like a bug. I can reproduce it with 2.1.11-11-gb7f5 too:
>>>
>>> dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
>>> fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --stonewall --name=1M
>>> --io_limit=1M  --name=2M --io_limit=2M
>>> [...]
>>>
>>> Run status group 0 (all jobs):
>>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>>> mint=2msec, maxt=2msec
>>>
>>> Run status group 1 (all jobs):
>>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>>> mint=2msec, maxt=2msec
>>>
>>> Why isn't io 1024KB for group 0? Additionally, shouldn't the total io
>>> written by each group be different? Jens?
>
> You are doing a sequential workload, skipping 4k every time. First write
> will be to offset 0, next to 8KB, etc. Write 128 would be to 1040384,
> which is 1MB - 8KB. Hence the next feasible offset after that would be
> 1MB, which is the end of the file. So how could it do more than 512KB of
> IO? That's 128 * 4KB.
>
> I didn't read the whole thread in detail, just looked at your last example
> here. And for that one, I don't see anything wrong.

I guess I would have thought io_limit always forced wraparound. For example:

# dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
# fio --bs=4k --filename=/dev/shm/1M --name=go1 --rw=write
[...]
Run status group 0 (all jobs):
  WRITE: io=1024KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s,
mint=3msec, maxt=3msec
# fio --bs=4k --filename=/dev/shm/1M --name=go2 --io_limit=2M --rw=write
Run status group 0 (all jobs):
  WRITE: io=2048KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s,
mint=6msec, maxt=6msec
[...]
# fio --bs=4k --filename=/dev/shm/1M --name=go3 --io_limit=2M --rw=write:4k
[...]
Run status group 0 (all jobs):
  WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
mint=2msec, maxt=2msec
# fio --bs=4k --filename=/dev/shm/1M --name=go4 --io_limit=2M --rw=write:4k
[...]
Run status group 0 (all jobs):
  WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
mint=2msec, maxt=2msec

go2 is a plain sequential job that does twice as much I/O as go1. Given
that the size of the file being written to has not changed between the
runs, one can infer that fio simply wrapped around and started again from
the first offset (0) to write the second MB of data. Given that, isn't it
fair to expect that when a skipping workload using io_limit (as in go4)
produces an offset beyond the end of the device, the same wraparound
behaviour as in go2 should occur, so that the total I/O done matches the
amount specified in io_limit?
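The expectation can be phrased as a small model (a sketch of the argument above; the wrap flag marks whether fio restarts from offset 0 when the next write would cross the end of the file):

```python
def total_io(bs, stride, file_size, io_limit, wrap):
    """Bytes written by a sequential job with the given offset stride."""
    done = pos = 0
    while done < io_limit:
        if pos + bs > file_size:  # next write would run past EOF
            if not wrap:
                break             # observed go3/go4 behaviour: stop early
            pos = 0               # expected behaviour: wrap to offset 0
        done += bs
        pos += stride
    return done

MB = 1 << 20
print(total_io(4096, 4096, MB, 2 * MB, wrap=True))   # go2: 2097152 (2MB)
print(total_io(4096, 8192, MB, 2 * MB, wrap=False))  # go4 observed: 524288
print(total_io(4096, 8192, MB, 2 * MB, wrap=True))   # go4 expected: 2097152
```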

-- 
Sitsofe | http://sucs.org/~sits/

[-- Attachment #2: Type: text/html, Size: 3429 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-28  2:24         ` Jens Axboe
  2014-09-28 10:32           ` Sitsofe Wheeler
@ 2014-09-28 10:36           ` Sitsofe Wheeler
  2014-09-28 14:24             ` Jens Axboe
  1 sibling, 1 reply; 19+ messages in thread
From: Sitsofe Wheeler @ 2014-09-28 10:36 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Akira Hayakawa, Andrey Kuzmin, fio

(Resending because first mail had an HTML part)

On 28 September 2014 03:24, Jens Axboe <axboe@kernel.dk> wrote:
>> On 24 September 2014 10:52, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>>>
>>> This looks like a bug. I can reproduce it with 2.1.11-11-gb7f5 too:
>>>
>>> dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
>>> fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --stonewall --name=1M
>>> --io_limit=1M  --name=2M --io_limit=2M
>>> [...]
>>>
>>> Run status group 0 (all jobs):
>>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>>> mint=2msec, maxt=2msec
>>>
>>> Run status group 1 (all jobs):
>>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>>> mint=2msec, maxt=2msec
>>>
>>> Why isn't io 1024KB for group 0? Additionally, shouldn't the total io
>>> written by each group be different? Jens?
>
> You are doing a sequential workload, skipping 4k every time. First write
> will be to offset 0, next to 8KB, etc. Write 128 would be to 1040384,
> which is 1MB - 8KB. Hence the next feasible offset after that would be
> 1MB, which is the end of the file. So how could it do more than 512KB of
> IO? That's 128 * 4KB.
>
> I didn't read the whole thread in detail, just looked at your last example
> here. And for that one, I don't see anything wrong.

I guess I would have thought io_limit always forced wraparound. For example:

# dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
# fio --bs=4k --filename=/dev/shm/1M --name=go1 --rw=write
[...]
Run status group 0 (all jobs):
  WRITE: io=1024KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s,
mint=3msec, maxt=3msec
# fio --bs=4k --filename=/dev/shm/1M --name=go2 --io_limit=2M --rw=write
Run status group 0 (all jobs):
  WRITE: io=2048KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s,
mint=6msec, maxt=6msec
[...]
# fio --bs=4k --filename=/dev/shm/1M --name=go3 --io_limit=2M --rw=write:4k
[...]
Run status group 0 (all jobs):
  WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
mint=2msec, maxt=2msec
# fio --bs=4k --filename=/dev/shm/1M --name=go4 --io_limit=2M --rw=write:4k
[...]
Run status group 0 (all jobs):
  WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
mint=2msec, maxt=2msec

go2 is a plain sequential job that does twice as much I/O as go1. Given
that the size of the file being written to has not changed between the
runs, one can infer that fio simply wrapped around and started again from
the first offset (0) to write the second MB of data. Given that, isn't it
fair to expect that when a skipping workload using io_limit (as in go4)
produces an offset beyond the end of the device, the same wraparound
behaviour as in go2 should occur, so that the total I/O done matches the
amount specified in io_limit?

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-28 10:36           ` Sitsofe Wheeler
@ 2014-09-28 14:24             ` Jens Axboe
  2014-09-28 15:08               ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2014-09-28 14:24 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: Akira Hayakawa, Andrey Kuzmin, fio

On 2014-09-28 04:36, Sitsofe Wheeler wrote:
> (Resending because first mail had an HTML part)
>
> On 28 September 2014 03:24, Jens Axboe <axboe@kernel.dk> wrote:
>>> On 24 September 2014 10:52, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>>>>
>>>> This looks like a bug. I can reproduce it with 2.1.11-11-gb7f5 too:
>>>>
>>>> dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
>>>> fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --stonewall --name=1M
>>>> --io_limit=1M  --name=2M --io_limit=2M
>>>> [...]
>>>>
>>>> Run status group 0 (all jobs):
>>>>     WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>>>> mint=2msec, maxt=2msec
>>>>
>>>> Run status group 1 (all jobs):
>>>>     WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>>>> mint=2msec, maxt=2msec
>>>>
>>>> Why isn't io 1024KB for group 0? Additionally, shouldn't the total io
>>>> written by each group be different? Jens?
>>
>> You are doing a sequential workload, skipping 4k every time. First write
>> will be to offset 0, next to 8KB, etc. Write 128 would be to 1040384,
>> which is 1MB - 8KB. Hence the next feasible offset after that would be
>> 1MB, which is the end of the file. So how could it do more than 512KB of
>> IO? That's 128 * 4KB.
>>
>> I didn't read the whole thread in detail, just looked at your last example
>> here. And for that one, I don't see anything wrong.
>
> I guess I would have thought io_limit always forced wraparound. For example:
>
> # dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
> # fio --bs=4k --filename=/dev/shm/1M --name=go1 --rw=write
> [...]
> Run status group 0 (all jobs):
>    WRITE: io=1024KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s,
> mint=3msec, maxt=3msec
> # fio --bs=4k --filename=/dev/shm/1M --name=go2 --io_limit=2M --rw=write
> Run status group 0 (all jobs):
>    WRITE: io=2048KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s,
> mint=6msec, maxt=6msec
> [...]
> # fio --bs=4k --filename=/dev/shm/1M --name=go3 --io_limit=2M --rw=write:4k
> [...]
> Run status group 0 (all jobs):
>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
> mint=2msec, maxt=2msec
> # fio --bs=4k --filename=/dev/shm/1M --name=go4 --io_limit=2M --rw=write:4k
> [...]
> Run status group 0 (all jobs):
>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
> mint=2msec, maxt=2msec
>
> go2 is a plain sequential job that does twice as much I/O as go1. Given
> that the size of the file being written to has not changed between the
> runs, one can infer that fio simply wrapped around and started again from
> the first offset (0) to write the second MB of data. Given that, isn't it
> fair to expect that when a skipping workload using io_limit (as in go4)
> produces an offset beyond the end of the device, the same wraparound
> behaviour as in go2 should occur, so that the total I/O done matches the
> amount specified in io_limit?

I would agree on that, behavior for those cases _should_ be the same. 
Without the holed IO, it closes/reopens the file and repeats the 1M 
writes. With it, it does not. I will take a look.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-28 14:24             ` Jens Axboe
@ 2014-09-28 15:08               ` Jens Axboe
  2014-09-28 19:44                 ` Sitsofe Wheeler
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2014-09-28 15:08 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: Akira Hayakawa, Andrey Kuzmin, fio

[-- Attachment #1: Type: text/plain, Size: 3310 bytes --]

On 2014-09-28 08:24, Jens Axboe wrote:
> On 2014-09-28 04:36, Sitsofe Wheeler wrote:
>> (Resending because first mail had an HTML part)
>>
>> On 28 September 2014 03:24, Jens Axboe <axboe@kernel.dk> wrote:
>>>> On 24 September 2014 10:52, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>>>>>
>>>>> This looks like a bug. I can reproduce it with 2.1.11-11-gb7f5 too:
>>>>>
>>>>> dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
>>>>> fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --stonewall --name=1M
>>>>> --io_limit=1M  --name=2M --io_limit=2M
>>>>> [...]
>>>>>
>>>>> Run status group 0 (all jobs):
>>>>>     WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s,
>>>>> maxb=256000KB/s,
>>>>> mint=2msec, maxt=2msec
>>>>>
>>>>> Run status group 1 (all jobs):
>>>>>     WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s,
>>>>> maxb=256000KB/s,
>>>>> mint=2msec, maxt=2msec
>>>>>
>>>>> Why isn't io 1024KB for group 0? Additionally, shouldn't the total io
>>>>> written by each group be different? Jens?
>>>
>>> You are doing a sequential workload, skipping 4k every time. First write
>>> will be to offset 0, next to 8KB, etc. Write 128 would be to 1040384,
>>> which is 1MB - 8KB. Hence the next feasible offset after that would be
>>> 1MB, which is the end of the file. So how could it do more than 512KB of
>>> IO? That's 128 * 4KB.
>>>
>>> I didn't read the whole thread in detail, just looked at your last
>>> example
>>> here. And for that one, I don't see anything wrong.
>>
>> I guess I would have thought io_limit always forced wraparound. For
>> example:
>>
>> # dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
>> # fio --bs=4k --filename=/dev/shm/1M --name=go1 --rw=write
>> [...]
>> Run status group 0 (all jobs):
>>    WRITE: io=1024KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s,
>> mint=3msec, maxt=3msec
>> # fio --bs=4k --filename=/dev/shm/1M --name=go2 --io_limit=2M --rw=write
>> Run status group 0 (all jobs):
>>    WRITE: io=2048KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s,
>> mint=6msec, maxt=6msec
>> [...]
>> # fio --bs=4k --filename=/dev/shm/1M --name=go3 --io_limit=2M
>> --rw=write:4k
>> [...]
>> Run status group 0 (all jobs):
>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>> mint=2msec, maxt=2msec
>> # fio --bs=4k --filename=/dev/shm/1M --name=go4 --io_limit=2M
>> --rw=write:4k
>> [...]
>> Run status group 0 (all jobs):
>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s,
>> mint=2msec, maxt=2msec
>>
>> go2 is a plain sequential job that does twice as much I/O as go1. Given
>> that the size of the file being written to has not changed between the
>> runs, one can infer that fio simply wrapped around and started again from
>> the first offset (0) to write the second MB of data. Given that, isn't it
>> fair to expect that when a skipping workload using io_limit (as in go4)
>> produces an offset beyond the end of the device, the same wraparound
>> behaviour as in go2 should occur, so that the total I/O done matches the
>> amount specified in io_limit?
>
> I would agree on that, behavior for those cases _should_ be the same.
> Without the holed IO, it closes/reopens the file and repeats the 1M
> writes. With it, it does not. I will take a look.

Does the attached fix it up?

-- 
Jens Axboe


[-- Attachment #2: seq-reset.patch --]
[-- Type: text/x-patch, Size: 544 bytes --]

diff --git a/io_u.c b/io_u.c
index 8546899c03e7..cbe14b3f5bda 100644
--- a/io_u.c
+++ b/io_u.c
@@ -283,8 +283,15 @@ static int get_next_seq_offset(struct thread_data *td, struct fio_file *f,
 			f->last_pos = f->real_file_size;
 
 		pos = f->last_pos - f->file_offset;
-		if (pos)
+		if (pos) {
 			pos += td->o.ddir_seq_add;
+			/*
+			 * If we reach beyond the end of the file with
+			 * holed IO, wrap around to the beginning again.
+			 */
+			if (pos >= f->real_file_size)
+				pos = f->file_offset;
+		}
 
 		*offset = pos;
 		return 0;

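[Editorial note: the intended post-fix behaviour can be seen in isolation with a rough Python model of the sequential-offset logic. This is only an illustration with invented names (seq_offsets and its parameters are not fio code); it mimics advancing by bs plus ddir_seq_add and wrapping to the file start once the offset runs past the end:]

```python
def seq_offsets(file_size, bs, seq_add, io_limit):
    """Offsets a rw=write:<seq_add> job would issue once holed IO
    wraps to the start of the file instead of stopping at the end."""
    offsets, pos = [], 0
    while len(offsets) * bs < io_limit:
        offsets.append(pos)
        # next position: last_pos (pos + bs) plus the hole (seq_add)
        pos += bs + seq_add
        # the fix: wrap back to the start once we run past the end
        if pos >= file_size:
            pos = 0
    return offsets

# 1MB file, bs=4k, 4k hole, io_limit=2M: 512 writes, wrapping every 128
offs = seq_offsets(1 << 20, 4096, 4096, 2 << 20)
```

With these numbers the 128th write lands at offset 1040384 (1MB - 8KB), matching the example quoted earlier in the thread, and the 129th wraps back to 0.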
^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-28 15:08               ` Jens Axboe
@ 2014-09-28 19:44                 ` Sitsofe Wheeler
  2014-09-28 22:13                   ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Sitsofe Wheeler @ 2014-09-28 19:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Akira Hayakawa, Andrey Kuzmin, fio

On Sun, Sep 28, 2014 at 09:08:31AM -0600, Jens Axboe wrote:
> On 2014-09-28 08:24, Jens Axboe wrote:
> >On 2014-09-28 04:36, Sitsofe Wheeler wrote:
> >>I guess I would have thought io_limit always forced wraparound. For
> >>example:
> >>
> >># dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
> >># fio --bs=4k --filename=/dev/shm/1M --name=go1 --rw=write
> >>[...]
> >>Run status group 0 (all jobs):
> >>   WRITE: io=1024KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s, mint=3msec, maxt=3msec
> >># fio --bs=4k --filename=/dev/shm/1M --name=go2 --io_limit=2M --rw=write
> >>Run status group 0 (all jobs):
> >>   WRITE: io=2048KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s, mint=6msec, maxt=6msec
> >>[...]
> >># fio --bs=4k --filename=/dev/shm/1M --name=go3 --io_limit=2M --rw=write:4k
> >>[...]
> >>   WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s, mint=2msec, maxt=2msec
> >>Run status group 0 (all jobs):
> >># fio --bs=4k --filename=/dev/shm/1M --name=go4 --io_limit=2M --rw=write:4k
> >>[...]
> >>Run status group 0 (all jobs):
> >>   WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s, mint=2msec, maxt=2msec
> >>
> >>go2 is a plain sequential job that does twice as much I/O as go1.
> >>Given that the size of the file being written to has not changed
> >>between the runs one could guess that fio simply wrapped around and
> >>started from the first offset (0) to write the second MB of data.
> >>Given this isn't it a fair assumption that when doing a skipping
> >>workload if io_limit is used (as in go4) and an offset beyond the
> >>end of the device is produced the same wraparound behaviour as go2
> >>should occur and the total io done should match that specified in
> >>io_limit?
> >
> >I would agree on that, behavior for those cases _should_ be the same.
> >Without the holed IO, it closes/reopens the file and repeats the 1M
> >writes. With it, it does not. I will take a look.
> 
> Does the attached fix it up?

The patch fixes
fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --name=go --io_limit=2M
but not
fio --bs=512k --rw=write --filename=/dev/shm/1M --name=go --number_ios=4
or
fio --bs=4k --rw=write --filename=/dev/shm/1M --name=go --zoneskip=4k --zonesize=4k --io_limit=2M

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-28 19:44                 ` Sitsofe Wheeler
@ 2014-09-28 22:13                   ` Jens Axboe
  2014-09-29  5:42                     ` Sitsofe Wheeler
  2014-09-29  7:41                     ` Sitsofe Wheeler
  0 siblings, 2 replies; 19+ messages in thread
From: Jens Axboe @ 2014-09-28 22:13 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: Akira Hayakawa, Andrey Kuzmin, fio

[-- Attachment #1: Type: text/plain, Size: 2522 bytes --]

On 09/28/2014 01:44 PM, Sitsofe Wheeler wrote:
> On Sun, Sep 28, 2014 at 09:08:31AM -0600, Jens Axboe wrote:
>> On 2014-09-28 08:24, Jens Axboe wrote:
>>> On 2014-09-28 04:36, Sitsofe Wheeler wrote:
>>>> I guess I would have thought io_limit always forced wraparound. For
>>>> example:
>>>>
>>>> # dd if=/dev/zero of=/dev/shm/1M bs=1M count=1
>>>> # fio --bs=4k --filename=/dev/shm/1M --name=go1 --rw=write
>>>> [...]
>>>> Run status group 0 (all jobs):
>>>>    WRITE: io=1024KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s, mint=3msec, maxt=3msec
>>>> # fio --bs=4k --filename=/dev/shm/1M --name=go2 --io_limit=2M --rw=write
>>>> Run status group 0 (all jobs):
>>>>    WRITE: io=2048KB, aggrb=341333KB/s, minb=341333KB/s, maxb=341333KB/s, mint=6msec, maxt=6msec
>>>> [...]
>>>> # fio --bs=4k --filename=/dev/shm/1M --name=go3 --io_limit=2M --rw=write:4k
>>>> [...]
>>>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s, mint=2msec, maxt=2msec
>>>> Run status group 0 (all jobs):
>>>> # fio --bs=4k --filename=/dev/shm/1M --name=go4 --io_limit=2M --rw=write:4k
>>>> [...]
>>>> Run status group 0 (all jobs):
>>>>    WRITE: io=512KB, aggrb=256000KB/s, minb=256000KB/s, maxb=256000KB/s, mint=2msec, maxt=2msec
>>>>
>>>> go2 is a plain sequential job that does twice as much I/O as go1.
>>>> Given that the size of the file being written to has not changed
>>>> between the runs one could guess that fio simply wrapped around and
>>>> started from the first offset (0) to write the second MB of data.
>>>> Given this isn't it a fair assumption that when doing a skipping
>>>> workload if io_limit is used (as in go4) and an offset beyond the
>>>> end of the device is produced the same wraparound behaviour as go2
>>>> should occur and the total io done should match that specified in
>>>> io_limit?
>>>
>>> I would agree on that, behavior for those cases _should_ be the same.
>>> Without the holed IO, it closes/reopens the file and repeats the 1M
>>> writes. With it, it does not. I will take a look.
>>
>> Does the attached fix it up?
> 
> The patch fixes
> fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --name=go --io_limit=2M
> but not
> fio --bs=512k --rw=write --filename=/dev/shm/1M --name=go --number_ios=4

number_ios=x is implemented as a cap, not a forced "must complete this
amount of ios to be done".
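[Editorial note: the distinction can be sketched with a toy model (ios_issued and its parameters are invented for illustration, not fio code): number_ios can only trim the I/O count of a sequential pass, never extend it, while io_limit sets a total that is reached by wrapping around the file:]

```python
def ios_issued(file_size, bs, number_ios=None, io_limit=None):
    # Rough model: a plain sequential pass issues file_size/bs I/Os.
    # number_ios is only a cap on that pass; io_limit/bs is a target
    # reached by wrapping around the file as many times as needed.
    full_pass = file_size // bs
    if io_limit is not None:
        return io_limit // bs              # wraps until the target is met
    if number_ios is not None:
        return min(full_pass, number_ios)  # cap only, never an extension
    return full_pass
```

For a 1MB file with bs=512k the pass is 2 I/Os, so number_ios=4 still yields only 2, which is why the command above stops short.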

> or
> fio --bs=4k --rw=write --filename=/dev/shm/1M --name=go --zoneskip=4k --zonesize=4k --io_limit=2M

That one is a bit more tricky. Oh, try the attached (keep the previous
applied).

-- 
Jens Axboe


[-- Attachment #2: zone-skip.patch --]
[-- Type: text/x-patch, Size: 710 bytes --]

diff --git a/io_u.c b/io_u.c
index 8546899c03e7..02f600be3126 100644
--- a/io_u.c
+++ b/io_u.c
@@ -748,9 +751,13 @@ static int fill_io_u(struct thread_data *td, struct io_u *io_u)
 	 * See if it's time to switch to a new zone
 	 */
 	if (td->zone_bytes >= td->o.zone_size && td->o.zone_skip) {
+		struct fio_file *f = io_u->file;
+
 		td->zone_bytes = 0;
-		io_u->file->file_offset += td->o.zone_range + td->o.zone_skip;
-		io_u->file->last_pos = io_u->file->file_offset;
+		f->file_offset += td->o.zone_range + td->o.zone_skip;
+		if (f->file_offset >= f->real_file_size)
+			f->file_offset = f->file_offset - f->real_file_size;
+		f->last_pos = f->file_offset;
 		td->io_skip_bytes += td->o.zone_skip;
 	}
 

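[Editorial note: the zoneskip case after this fix can be modelled the same way. Again a toy sketch with invented names, assuming the wrap goes back to offset 0; zone_range is taken as equal to zone_size, as fio defaults it when unset:]

```python
def zone_offsets(file_size, bs, zone_size, zone_skip, io_limit):
    """Offsets for a zoned job that writes zone_size, skips zone_skip,
    and wraps to the start of the file when it runs off the end."""
    offsets, pos, zone_bytes = [], 0, 0
    while len(offsets) * bs < io_limit:
        offsets.append(pos)
        pos += bs
        zone_bytes += bs
        if zone_bytes >= zone_size:   # zone done: skip to the next one
            zone_bytes = 0
            pos += zone_skip
        if pos >= file_size:          # the fix: wrap around
            pos = 0
    return offsets

# zonesize=4k, zoneskip=4k, bs=4k on a 1MB file: same 8KB stride as
# rw=write:4k, wrapping every 128 writes until io_limit=2M is reached
offs = zone_offsets(1 << 20, 4096, 4096, 4096, 2 << 20)
```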
^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-28 22:13                   ` Jens Axboe
@ 2014-09-29  5:42                     ` Sitsofe Wheeler
  2014-09-29  7:41                     ` Sitsofe Wheeler
  1 sibling, 0 replies; 19+ messages in thread
From: Sitsofe Wheeler @ 2014-09-29  5:42 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Akira Hayakawa, Andrey Kuzmin, fio

[-- Attachment #1: Type: text/plain, Size: 850 bytes --]

On 28 September 2014 23:13, Jens Axboe <axboe@kernel.dk> wrote:

> On 09/28/2014 01:44 PM, Sitsofe Wheeler wrote:
> > The patch fixes
> > fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --name=go --io_limit=2M
> > but not
> > fio --bs=512k --rw=write --filename=/dev/shm/1M --name=go --number_ios=4
>
> number_ios=x is implemented as a cap, not a forced "must complete this
> amount of ios to be done".
>

Ah, I see. Now that I look again, the HOWTO even says it doesn't extend the
number of I/Os - I should have read more carefully!


> > or
> > fio --bs=4k --rw=write --filename=/dev/shm/1M --name=go --zoneskip=4k
> --zonesize=4k --io_limit=2M
>
> That one is a bit more tricky. Oh, try the attached (keep the previous
> applied).


The latest patch fixes zone skipping for me too. Akira, are things better
for you too?

-- 
Sitsofe | http://sucs.org/~sits/

[-- Attachment #2: Type: text/html, Size: 1494 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-28 22:13                   ` Jens Axboe
  2014-09-29  5:42                     ` Sitsofe Wheeler
@ 2014-09-29  7:41                     ` Sitsofe Wheeler
  2014-09-30  1:23                       ` Akira Hayakawa
  1 sibling, 1 reply; 19+ messages in thread
From: Sitsofe Wheeler @ 2014-09-29  7:41 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Akira Hayakawa, Andrey Kuzmin, fio

(Resending because the list rejected the HTML part)

On 28 September 2014 23:13, Jens Axboe <axboe@kernel.dk> wrote:

> On 09/28/2014 01:44 PM, Sitsofe Wheeler wrote:
> > The patch fixes
> > fio --bs=4k --rw=write:4k --filename=/dev/shm/1M --name=go --io_limit=2M
> > but not
> > fio --bs=512k --rw=write --filename=/dev/shm/1M --name=go --number_ios=4
>
> number_ios=x is implemented as a cap, not a forced "must complete this
> amount of ios to be done".

Ah, I see. Now that I look again, the HOWTO even says it doesn't extend the
number of I/Os - I should have read more carefully!

> > or
> > fio --bs=4k --rw=write --filename=/dev/shm/1M --name=go --zoneskip=4k
> --zonesize=4k --io_limit=2M
>
> That one is a bit more tricky. Oh, try the attached (keep the previous
> applied).

The latest patch fixes zone skipping for me too. Akira, are things better
for you too?

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-29  7:41                     ` Sitsofe Wheeler
@ 2014-09-30  1:23                       ` Akira Hayakawa
  2014-09-30  2:21                         ` Jens Axboe
  2014-10-05  7:15                         ` Akira Hayakawa
  0 siblings, 2 replies; 19+ messages in thread
From: Akira Hayakawa @ 2014-09-30  1:23 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: Jens Axboe, Akira Hayakawa, Andrey Kuzmin, fio

> The latest patch fixes zone skipping for me too. Akira, are things better
> for you too?
Thanks Sitsofe,
I will make time this weekend to try your patch out.
But at first look, it looks fine.

- Akira


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-30  1:23                       ` Akira Hayakawa
@ 2014-09-30  2:21                         ` Jens Axboe
  2014-10-05  7:15                         ` Akira Hayakawa
  1 sibling, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2014-09-30  2:21 UTC (permalink / raw)
  To: Akira Hayakawa, Sitsofe Wheeler; +Cc: Andrey Kuzmin, fio

On 2014-09-29 19:23, Akira Hayakawa wrote:
>> The latest patch fixes zone skipping for me too. Akira, are things better
>> for you too?
> Thanks Sitsofe,
> I will make time this weekend to try your patch out.
> But at first look, it looks fine.

Just update to latest -git, both fixes have been committed.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Question] How to perform stride access?
  2014-09-30  1:23                       ` Akira Hayakawa
  2014-09-30  2:21                         ` Jens Axboe
@ 2014-10-05  7:15                         ` Akira Hayakawa
  1 sibling, 0 replies; 19+ messages in thread
From: Akira Hayakawa @ 2014-10-05  7:15 UTC (permalink / raw)
  To: sitsofe; +Cc: axboe, andrey.v.kuzmin, fio

The latest fio works as I expected.
It wraps around until the amount of I/O specified by io_limit has been done.

Thanks,

- Akira

On 9/30/14 10:23 AM, Akira Hayakawa wrote:
>> The latest patch fixes zone skipping for me too. Akira, are things better
>> for you too?
> Thanks Sitsofe,
> I will make time this weekend to try your patch out.
> But at first look, it looks fine.
> 
> - Akira
> 



^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2014-10-05  7:15 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-23 13:47 [Question] How to perform stride access? Akira Hayakawa
2014-09-23 14:05 ` Andrey Kuzmin
2014-09-24  8:35   ` Akira Hayakawa
     [not found]     ` <CANvN+emPbk+MwwNoABs-rdWdJbn+JD+O0GVAGftR4w7mNVndcg@mail.gmail.com>
2014-09-24  9:28       ` Akira Hayakawa
2014-09-24  9:52     ` Sitsofe Wheeler
2014-09-24  9:58       ` Akira Hayakawa
2014-09-24 21:22       ` Sitsofe Wheeler
2014-09-28  2:24         ` Jens Axboe
2014-09-28 10:32           ` Sitsofe Wheeler
2014-09-28 10:36           ` Sitsofe Wheeler
2014-09-28 14:24             ` Jens Axboe
2014-09-28 15:08               ` Jens Axboe
2014-09-28 19:44                 ` Sitsofe Wheeler
2014-09-28 22:13                   ` Jens Axboe
2014-09-29  5:42                     ` Sitsofe Wheeler
2014-09-29  7:41                     ` Sitsofe Wheeler
2014-09-30  1:23                       ` Akira Hayakawa
2014-09-30  2:21                         ` Jens Axboe
2014-10-05  7:15                         ` Akira Hayakawa
