* Limit LBA Range
@ 2014-09-29  3:28 Jon Tango
  2014-09-29 19:33 ` Jens Axboe
  0 siblings, 1 reply; 26+ messages in thread
From: Jon Tango @ 2014-09-29  3:28 UTC (permalink / raw)
  To: fio

I am using Windows Server 2012 R2 and am attempting to limit the LBA range of
the test. It doesn't seem to be working. I am using the following; is it correct?

[global]
name=4ktest
filename=\\.\physicaldrive1
direct=1
numjobs=8
norandommap
ba=4k
time_based
size=745g
log_avg_msec=100000
group_reporting=1

#########################################################

[4K Precon]
stonewall
runtime=15000
iodepth=32
bs=4k
rw=randwrite


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-29  3:28 Limit LBA Range Jon Tango
@ 2014-09-29 19:33 ` Jens Axboe
  2014-09-29 20:46   ` Jon Tango
  2014-09-29 21:01   ` Jon Tango
  0 siblings, 2 replies; 26+ messages in thread
From: Jens Axboe @ 2014-09-29 19:33 UTC (permalink / raw)
  To: Jon Tango, fio

On 2014-09-28 21:28, Jon Tango wrote:
> I am using windows server 2012R2 and am attempting to limit the LBA range of
> the test. It doesn't seem to be working, I am using this, is it correct?
>
> [global]
> name=4ktest
> filename=\\.\physicaldrive1
> direct=1
> numjobs=8
> norandommap
> ba=4k
> time_based
> size=745g
> log_avg_msec=100000
> group_reporting=1
>
> #########################################################
>
> [4K Precon]
> stonewall
> runtime=15000
> iodepth=32
> bs=4k
> rw=randwrite

The above will be doing IO between 0 and 745G, and do 745G worth of IO 
(in other words, the full range given). Not sure how to answer your 
question more precisely, as you don't mention what LBAs you want your 
test to be limited to.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Limit LBA Range
  2014-09-29 19:33 ` Jens Axboe
@ 2014-09-29 20:46   ` Jon Tango
  2014-09-29 20:59     ` Jens Axboe
  2014-09-29 21:01   ` Jon Tango
  1 sibling, 1 reply; 26+ messages in thread
From: Jon Tango @ 2014-09-29 20:46 UTC (permalink / raw)
  To: 'Jens Axboe', fio

Thanks for the reply. The capacity of the SSD is 850GB, but I am trying to
run the workload continuously between 0 and 745GB of the capacity. I would
like the workload to be time based so that it continues to run, even if more
than 745GB of data is written, but still within that same range. Is there a
method of continuing to run the workload from 0 to 745GB, even if more than
745GB is written? 

-----Original Message-----
From: fio-owner@vger.kernel.org [mailto:fio-owner@vger.kernel.org] On Behalf
Of Jens Axboe
Sent: Monday, September 29, 2014 2:34 PM
To: Jon Tango; fio@vger.kernel.org
Subject: Re: Limit LBA Range

On 2014-09-28 21:28, Jon Tango wrote:
> I am using windows server 2012R2 and am attempting to limit the LBA 
> range of the test. It doesn't seem to be working, I am using this, is it
correct?
>
> [global]
> name=4ktest
> filename=\\.\physicaldrive1
> direct=1
> numjobs=8
> norandommap
> ba=4k
> time_based
> size=745g
> log_avg_msec=100000
> group_reporting=1
>
> #########################################################
>
> [4K Precon]
> stonewall
> runtime=15000
> iodepth=32
> bs=4k
> rw=randwrite

The above will be doing IO between 0 and 745G, and do 745G worth of IO (in
other words, the full range given). Not sure how to answer your question
more precisely, as you don't mention what LBAs you want your test to be
limited to.

--
Jens Axboe

--
To unsubscribe from this list: send the line "unsubscribe fio" in the body
of a message to majordomo@vger.kernel.org More majordomo info at
http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-29 20:46   ` Jon Tango
@ 2014-09-29 20:59     ` Jens Axboe
  2014-09-29 21:10       ` Jon Tango
  0 siblings, 1 reply; 26+ messages in thread
From: Jens Axboe @ 2014-09-29 20:59 UTC (permalink / raw)
  To: Jon Tango, fio

On 2014-09-29 14:46, Jon Tango wrote:
> Thanks for the reply. the capacity of the SSD is 850GB, but I am trying to
> run the workload continuously between 0 and 745GB of the capacity. I would
> like to workload to be time based so that it continues to run, even if more
> than 745 GB of data is written, but still within that same range. Is there a
> method of continuing to run the workload from 0 to 745GB, even if more than
> 745GB is written?

(please don't top post)

Sure, if you want it to run for 4h within that range, just do:

runtime=4h
time_based

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Limit LBA Range
  2014-09-29 19:33 ` Jens Axboe
  2014-09-29 20:46   ` Jon Tango
@ 2014-09-29 21:01   ` Jon Tango
  1 sibling, 0 replies; 26+ messages in thread
From: Jon Tango @ 2014-09-29 21:01 UTC (permalink / raw)
  To: 'Jens Axboe', fio

The capacity of the SSD is 850GB, but I am attempting to run the workload
from 0 to 745GB of the capacity. I would like the workload to continue
for 15,000 seconds, even if more than 745GB of data is written. Is there a
method for that?

-----Original Message-----
From: fio-owner@vger.kernel.org [mailto:fio-owner@vger.kernel.org] On Behalf
Of Jens Axboe
Sent: Monday, September 29, 2014 2:34 PM
To: Jon Tango; fio@vger.kernel.org
Subject: Re: Limit LBA Range

On 2014-09-28 21:28, Jon Tango wrote:
> I am using windows server 2012R2 and am attempting to limit the LBA 
> range of the test. It doesn't seem to be working, I am using this, is it
correct?
>
> [global]
> name=4ktest
> filename=\\.\physicaldrive1
> direct=1
> numjobs=8
> norandommap
> ba=4k
> time_based
> size=745g
> log_avg_msec=100000
> group_reporting=1
>
> #########################################################
>
> [4K Precon]
> stonewall
> runtime=15000
> iodepth=32
> bs=4k
> rw=randwrite

The above will be doing IO between 0 and 745G, and do 745G worth of IO (in
other words, the full range given). Not sure how to answer your question
more precisely, as you don't mention what LBAs you want your test to be
limited to.

--
Jens Axboe

--
To unsubscribe from this list: send the line "unsubscribe fio" in the body
of a message to majordomo@vger.kernel.org More majordomo info at
http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Limit LBA Range
  2014-09-29 20:59     ` Jens Axboe
@ 2014-09-29 21:10       ` Jon Tango
  2014-09-29 21:21         ` Jens Axboe
  2014-09-30  2:23         ` Sitsofe Wheeler
  0 siblings, 2 replies; 26+ messages in thread
From: Jon Tango @ 2014-09-29 21:10 UTC (permalink / raw)
  To: 'Jens Axboe', fio



-----Original Message-----
From: fio-owner@vger.kernel.org [mailto:fio-owner@vger.kernel.org] On Behalf
Of Jens Axboe
Sent: Monday, September 29, 2014 4:00 PM
To: Jon Tango; fio@vger.kernel.org
Subject: Re: Limit LBA Range

On 2014-09-29 14:46, Jon Tango wrote:
> Thanks for the reply. the capacity of the SSD is 850GB, but I am 
> trying to run the workload continuously between 0 and 745GB of the 
> capacity. I would like to workload to be time based so that it 
> continues to run, even if more than 745 GB of data is written, but 
> still within that same range. Is there a method of continuing to run 
> the workload from 0 to 745GB, even if more than 745GB is written?

(please don't top post)

Sure, if you want it to run for 4h within that range, just do:

runtime=4h
time_based

--
Jens Axboe

--
To unsubscribe from this list: send the line "unsubscribe fio" in the body
of a message to majordomo@vger.kernel.org More majordomo info at
http://vger.kernel.org/majordomo-info.html

I include both of those commands, in addition to size=745g, and it seems to
be conducting the workload over the entire LBA range. Is this the correct
combination of the three parameters? Here is the test script: 

 [global]
name=4ktest
filename=\\.\physicaldrive1
direct=1
numjobs=8
norandommap
ba=4k
time_based
size=745g
log_avg_msec=100000
group_reporting=1
#########################################################

[4K Precon]
stonewall
runtime=15000
iodepth=32
bs=4k
rw=randwrite



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-29 21:10       ` Jon Tango
@ 2014-09-29 21:21         ` Jens Axboe
  2014-09-29 21:37           ` Jon Tango
  2014-09-30  2:23         ` Sitsofe Wheeler
  1 sibling, 1 reply; 26+ messages in thread
From: Jens Axboe @ 2014-09-29 21:21 UTC (permalink / raw)
  To: Jon Tango, fio

On 2014-09-29 15:10, Jon Tango wrote:
> I include both of those commands, in addition to size=745g, and it seems to
> be conducting the workload over the entire LBA range. Is this the correct
> combination of the three parameters? Here is the test script:

When you explicitly set size=745G, the range is 0..745G as previously 
specified.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Limit LBA Range
  2014-09-29 21:21         ` Jens Axboe
@ 2014-09-29 21:37           ` Jon Tango
  2014-09-29 21:38             ` Jens Axboe
  0 siblings, 1 reply; 26+ messages in thread
From: Jon Tango @ 2014-09-29 21:37 UTC (permalink / raw)
  To: 'Jens Axboe', fio


On 2014-09-29 15:10, Jon Tango wrote:
> I include both of those commands, in addition to size=745g, and it 
> seems to be conducting the workload over the entire LBA range. Is this 
> the correct combination of the three parameters? Here is the test script:

When you explicitly set size=745G, the range is 0..745G as previously
specified.

--
Jens Axboe

--
So it would be both time_based and limited to 745G as well? The workload
will continue to run even if over 745GB of data is written, but only to the
0..745GB range?

size=745G
time_based
runtime=4h


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-29 21:37           ` Jon Tango
@ 2014-09-29 21:38             ` Jens Axboe
  0 siblings, 0 replies; 26+ messages in thread
From: Jens Axboe @ 2014-09-29 21:38 UTC (permalink / raw)
  To: Jon Tango, fio

On 2014-09-29 15:37, Jon Tango wrote:
>
> On 2014-09-29 15:10, Jon Tango wrote:
>> I include both of those commands, in addition to size=745g, and it
>> seems to be conducting the workload over the entire LBA range. Is this
>> the correct combination of the three parameters? Here is the test script:
>
> When you explicitly set size=745G, the range is 0..745G as previously
> specified.
>
> --
> Jens Axboe
>
> --
> So it would be both time_based and limited to 745G as well? The workload
> will continue to run even if over 745GB of data is written, but only to the
> 0..745GB range?
>
> size=745G
> time_based
> runtime=4h

Yes and yes

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-29 21:10       ` Jon Tango
  2014-09-29 21:21         ` Jens Axboe
@ 2014-09-30  2:23         ` Sitsofe Wheeler
  2014-09-30  2:49           ` Jens Axboe
  1 sibling, 1 reply; 26+ messages in thread
From: Sitsofe Wheeler @ 2014-09-30  2:23 UTC (permalink / raw)
  To: Jon Tango; +Cc: Jens Axboe, fio

On 29 September 2014 22:10, Jon Tango <cheerios123@outlook.com> wrote:
>
> I include both of those commands, in addition to size=745g, and it seems to
> be conducting the workload over the entire LBA range. Is this the correct
> combination of the three parameters? Here is the test script:
>
>  [global]
> name=4ktest
> filename=\\.\physicaldrive1
> direct=1
> numjobs=8
> norandommap
> ba=4k
> time_based
> size=745g
> log_avg_msec=100000
> group_reporting=1
> #########################################################
>
> [4K Precon]
> stonewall
> runtime=15000
> iodepth=32
> bs=4k
> rw=randwrite

Bear in mind that because you are asking for eight stonewalled jobs
this fio run will take 8 * 15000 seconds (around 33 hours) to finish
because after the first job has run for four hours the second job will
start etc.

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  2:23         ` Sitsofe Wheeler
@ 2014-09-30  2:49           ` Jens Axboe
  2014-09-30  3:18             ` Sitsofe Wheeler
  2014-09-30  4:57             ` Jon Tango
  0 siblings, 2 replies; 26+ messages in thread
From: Jens Axboe @ 2014-09-30  2:49 UTC (permalink / raw)
  To: Sitsofe Wheeler, Jon Tango; +Cc: fio

On 2014-09-29 20:23, Sitsofe Wheeler wrote:
> On 29 September 2014 22:10, Jon Tango <cheerios123@outlook.com> wrote:
>>
>> I include both of those commands, in addition to size=745g, and it seems to
>> be conducting the workload over the entire LBA range. Is this the correct
>> combination of the three parameters? Here is the test script:
>>
>>   [global]
>> name=4ktest
>> filename=\\.\physicaldrive1
>> direct=1
>> numjobs=8
>> norandommap
>> ba=4k
>> time_based
>> size=745g
>> log_avg_msec=100000
>> group_reporting=1
>> #########################################################
>>
>> [4K Precon]
>> stonewall
>> runtime=15000
>> iodepth=32
>> bs=4k
>> rw=randwrite
>
> Bear in mind that because you are asking for eight stonewalled jobs
> this fio run will take 8 * 15000 seconds (around 33 hours) to finish
> because after the first job has run for four hours the second job will
> start etc.

Since they are grouped with numjobs=x, they belong to the same group. 
Hence the stonewall isn't going to do anything here. If it were split into 
two precondition sections, a la:

[global]
numjobs=4
...

[4K Precon stage 1]
runtime=15000
iodepth=32
bs=4k
rw=randwrite

[4K Precon stage 2]
stonewall
runtime=15000
iodepth=32
bs=4k
rw=randwrite

Then stage 2 would not start until stage 1 had finished.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  2:49           ` Jens Axboe
@ 2014-09-30  3:18             ` Sitsofe Wheeler
  2014-09-30  4:57             ` Jon Tango
  1 sibling, 0 replies; 26+ messages in thread
From: Sitsofe Wheeler @ 2014-09-30  3:18 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Jon Tango, fio

On 30 September 2014 03:49, Jens Axboe <axboe@kernel.dk> wrote:
> Since they are grouped with numjobs=x, they belong to the same group. Hence

My bad - thanks for the correction Jens!

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Limit LBA Range
  2014-09-30  2:49           ` Jens Axboe
  2014-09-30  3:18             ` Sitsofe Wheeler
@ 2014-09-30  4:57             ` Jon Tango
  2014-09-30  6:23               ` Sitsofe Wheeler
  1 sibling, 1 reply; 26+ messages in thread
From: Jon Tango @ 2014-09-30  4:57 UTC (permalink / raw)
  To: 'Jens Axboe', 'Sitsofe Wheeler'; +Cc: fio

>> I include both of those commands, in addition to size=745g, and it 
>> seems to be conducting the workload over the entire LBA range. Is 
>> this the correct combination of the three parameters? Here is the test script:
>>
>>   [global]
>> name=4ktest
>> filename=\\.\physicaldrive1
>> direct=1
>> numjobs=8
>> norandommap
>> ba=4k
>> time_based
>> size=745g
>> log_avg_msec=100000
>> group_reporting=1
>> #########################################################
>>
>> [4K Precon]
>> stonewall
>> runtime=15000
>> iodepth=32
>> bs=4k
>> rw=randwrite
>
> Bear in mind that because you are asking for eight stonewalled jobs 
> this fio run will take 8 * 15000 seconds (around 33 hours) to finish 
> because after the first job has run for four hours the second job will 
> start etc.

Since they are grouped with numjobs=x, they belong to the same group. 
Hence the stonewall isn't going to do anything here. If it was split in two precondition sections, ala:

[global]
numjobs=4
...

[4K Precon stage 1]
runtime=15000
iodepth=32
bs=4k
rw=randwrite

[4K Precon stage 2]
stonewall
runtime=15000
iodepth=32
bs=4k
rw=randwrite

Then stage 2 would not start until stage 1 had finished.

--
Jens Axboe

--

I have tested with these options several times in a Windows environment. SSDs experience higher performance if more of the drive's area is left 'spare' (overprovisioning), and when the LBA range is limited with a similar methodology the same SSD shows higher speeds when tested with VDBench. 
Is it possible there is a bug with fio on Windows?


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  4:57             ` Jon Tango
@ 2014-09-30  6:23               ` Sitsofe Wheeler
  2014-09-30  6:34                 ` Jon Tango
  0 siblings, 1 reply; 26+ messages in thread
From: Sitsofe Wheeler @ 2014-09-30  6:23 UTC (permalink / raw)
  To: Jon Tango; +Cc: Jens Axboe, fio

On 30 September 2014 05:57, Jon Tango <cheerios123@outlook.com> wrote:
>
> I have tested with these options several times in a Windows environment. SSDs experience higher performance if more area of the drive is left 'spare'(overprovisioning), and when limiting the LBA range with similar methodology the same SSD features higher speed when tested with VDBench.
> Is it possible there is a bug with fio in windows?

It's impossible to even begin to say without knowing what your vdbench
taskfile looked like...

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Limit LBA Range
  2014-09-30  6:23               ` Sitsofe Wheeler
@ 2014-09-30  6:34                 ` Jon Tango
  2014-09-30  7:36                   ` Sitsofe Wheeler
  0 siblings, 1 reply; 26+ messages in thread
From: Jon Tango @ 2014-09-30  6:34 UTC (permalink / raw)
  To: 'Sitsofe Wheeler'; +Cc: 'Jens Axboe', fio


>
> I have tested with these options several times in a Windows environment. SSDs experience higher performance if more area of the drive is left 'spare'(overprovisioning), and when limiting the LBA range with similar methodology the same SSD features higher speed when tested with VDBench.
> Is it possible there is a bug with fio in windows?

It's impossible to even begin to say without knowing what your vdbench taskfile looked like...

--
Sitsofe | http://sucs.org/~sits/
--
To unsubscribe from this list: send the line "unsubscribe fio" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html

The taskfile is this: 


>>   [global]
>> name=4ktest
>> filename=\\.\physicaldrive1
>> direct=1
>> numjobs=8
>> norandommap
>> ba=4k
>> time_based
>> size=745g
>> log_avg_msec=100000
>> group_reporting=1
>> #########################################################
>>
>> [4K Precon]
>> stonewall
>> runtime=15000
>> iodepth=32
>> bs=4k
>> rw=randwrite
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  6:34                 ` Jon Tango
@ 2014-09-30  7:36                   ` Sitsofe Wheeler
  2014-09-30  7:56                     ` Jon Tango
  2014-09-30  8:57                     ` Andrey Kuzmin
  0 siblings, 2 replies; 26+ messages in thread
From: Sitsofe Wheeler @ 2014-09-30  7:36 UTC (permalink / raw)
  To: Jon Tango; +Cc: Jens Axboe, fio

On 30 September 2014 07:34, Jon Tango <cheerios123@outlook.com> wrote:
>
> The taskfile is this:

I should have been more specific - you need to show the _vdbench_
parameter file that you are comparing against, in addition to your
fio job file.

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Limit LBA Range
  2014-09-30  7:36                   ` Sitsofe Wheeler
@ 2014-09-30  7:56                     ` Jon Tango
  2014-09-30 13:07                       ` Sitsofe Wheeler
  2014-09-30 14:44                       ` Jens Axboe
  2014-09-30  8:57                     ` Andrey Kuzmin
  1 sibling, 2 replies; 26+ messages in thread
From: Jon Tango @ 2014-09-30  7:56 UTC (permalink / raw)
  To: 'Sitsofe Wheeler'; +Cc: 'Jens Axboe', fio



On 30 September 2014 07:34, Jon Tango <cheerios123@outlook.com> wrote:
>
> The taskfile is this:

I should have been more specific - you need to show both the _vdbench_ parameter file that you are comparing to in addition to showing your fio job file.

--
Sitsofe | http://sucs.org/~sits/
--
To unsubscribe from this list: send the line "unsubscribe fio" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html

Here you go :) VDBench uses percentages for specifying the LBA range. 

hd=localhost,clients=4,jvms=4
sd=s1,lun=\\.\PhysicalDrive1,align=4096,range=(0,86)
*
sd=default,offset=4096,align=4096
wd=wd1,sd=s1,rdpct=0,seekpct=100
*
rd=rd1,wd=wd1,iorate=max,forthreads=128,xfersize=4k,elapsed=18000,interval=1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  7:36                   ` Sitsofe Wheeler
  2014-09-30  7:56                     ` Jon Tango
@ 2014-09-30  8:57                     ` Andrey Kuzmin
  2014-09-30  9:14                       ` Jon Tango
  2014-09-30 14:47                       ` Jens Axboe
  1 sibling, 2 replies; 26+ messages in thread
From: Andrey Kuzmin @ 2014-09-30  8:57 UTC (permalink / raw)
  To: Jon Tango; +Cc: Jens Axboe, Sitsofe Wheeler, fio

The same thing could be done by partitioning the SSD in the operating
system with the desired partition size, and then running a time-based
fio job against the partition.

Basically, fio options offset/size should yield the same net result,
but I'm always unsure on what is the actual effect of mixing size and
time_based in a single job, and whether fio will wrap around the
specified size to the specified offset if the job is time-based. HOWTO
could have used some clarification on size/time_based interaction.
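
For illustration, a minimal sketch of the offset/size combination (the
numbers below are only an example, not taken from the job in this thread):

# restrict this job to the region from 50G to 750G of the raw device
offset=50g
size=700g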

Regards,
Andrey

On Sep 30, 2014 11:41 AM, "Sitsofe Wheeler" <sitsofe@gmail.com> wrote:
>
> On 30 September 2014 07:34, Jon Tango <cheerios123@outlook.com> wrote:
> >
> > The taskfile is this:
>
> I should have been more specific - you need to show both the _vdbench_
> parameter file that you are comparing to in addition to showing your
> fio job file.
>
> --
> Sitsofe |http://sucs.org/~sits/
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in
> the body of a message tomajordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Limit LBA Range
  2014-09-30  8:57                     ` Andrey Kuzmin
@ 2014-09-30  9:14                       ` Jon Tango
  2014-09-30  9:17                         ` Andrey Kuzmin
  2014-09-30 14:47                       ` Jens Axboe
  1 sibling, 1 reply; 26+ messages in thread
From: Jon Tango @ 2014-09-30  9:14 UTC (permalink / raw)
  To: 'Andrey Kuzmin'
  Cc: 'Jens Axboe', 'Sitsofe Wheeler', fio


The same thing could be done by partitioning the SSD in the operating system with the desired partition size, and then running a time-based fio job against the partition.

Basically, fio options offset/size should yield the same net result, but I'm always unsure on what is the actual effect of mixing size and time_based in a single job, and whether fio will wrap around the specified size to the specified offset if the job is time-based. HOWTO could have used some clarification on size/time_based interaction.

Regards,
Andrey

On Sep 30, 2014 11:41 AM, "Sitsofe Wheeler" <sitsofe@gmail.com> wrote:
>
> On 30 September 2014 07:34, Jon Tango <cheerios123@outlook.com> wrote:
> >
> > The taskfile is this:
>
> I should have been more specific - you need to show both the _vdbench_ 
> parameter file that you are comparing to in addition to showing your 
> fio job file.
>
> --
> Sitsofe |http://sucs.org/~sits/
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in the 
> body of a message tomajordomo@vger.kernel.org More majordomo info at 
> http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe fio" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html

I really want to avoid a filesystem; they create so much interference. I am focusing on just testing the raw device :)


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  9:14                       ` Jon Tango
@ 2014-09-30  9:17                         ` Andrey Kuzmin
  2014-09-30 14:48                           ` Jens Axboe
  0 siblings, 1 reply; 26+ messages in thread
From: Andrey Kuzmin @ 2014-09-30  9:17 UTC (permalink / raw)
  To: Jon Tango; +Cc: Jens Axboe, Sitsofe Wheeler, fio

On Tue, Sep 30, 2014 at 1:14 PM, Jon Tango <cheerios123@outlook.com> wrote:
>
> The same thing could be done by partitioning the SSD in the operating system with the desired partition size, and then running a time-based fio job against the partition.
>
> Basically, fio options offset/size should yield the same net result, but I'm always unsure on what is the actual effect of mixing size and time_based in a single job, and whether fio will wrap around the specified size to the specified offset if the job is time-based. HOWTO could have used some clarification on size/time_based interaction.
>
> Regards,
> Andrey
>
> On Sep 30, 2014 11:41 AM, "Sitsofe Wheeler" <sitsofe@gmail.com> wrote:
>>
>> On 30 September 2014 07:34, Jon Tango <cheerios123@outlook.com> wrote:
>> >
>> > The taskfile is this:
>>
>> I should have been more specific - you need to show both the _vdbench_
>> parameter file that you are comparing to in addition to showing your
>> fio job file.
>>
>> --
>> Sitsofe |http://sucs.org/~sits/
>> --
>> To unsubscribe from this list: send the line "unsubscribe fio" in the
>> body of a message tomajordomo@vger.kernel.org More majordomo info at
>> http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
> I'm really wanting to avoid the filesystem, they create so much interference. I am focusing on just testing the raw device :)

I'm not sure why a filesystem would be involved in partitioning; that
happens at the raw device level.

Regards,
Andrey


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  7:56                     ` Jon Tango
@ 2014-09-30 13:07                       ` Sitsofe Wheeler
  2014-09-30 21:17                         ` Sitsofe Wheeler
  2014-09-30 14:44                       ` Jens Axboe
  1 sibling, 1 reply; 26+ messages in thread
From: Sitsofe Wheeler @ 2014-09-30 13:07 UTC (permalink / raw)
  To: Jon Tango; +Cc: Jens Axboe, fio

On 30 September 2014 08:56, Jon Tango <cheerios123@outlook.com> wrote:
>
> On 30 September 2014 07:34, Jon Tango <cheerios123@outlook.com> wrote:
>>
>> The taskfile is this:
>
> I should have been more specific - you need to show both the _vdbench_ parameter file that you are comparing to in addition to showing your fio job file.
>
> Here you go :) VDBench uses percentages for specifying the LBA range.
>
> hd=localhost,clients=4,jvms=4
> sd=s1,lun=\\.\PhysicalDrive1,align=4096,range=(0,86)
> *
> sd=default,offset=4096,align=4096
> wd=wd1,sd=s1,rdpct=0,seekpct=100
> *
> rd=rd1,wd=wd1,iorate=max,forthreads=128,xfersize=4k,elapsed=18000,interval=1

I'm not familiar with vdbench but unpacking the above I'd guess the following:

Define a host definition with the following parameters:
Simulate 4 clients
Override the default process cloning logic and create 4 processes (via
4 JVMs) and run the random workload on each one
Run all processes on the local machine

Define a storage definition with the name s1 with the following parameters:
Write to the device \\.\PhysicalDrive1
(Align I/O to 4k but this is redundant as alignment defaults to
xfersize according to the documentation for align=)
Use only the first 86% of the device
Use an sd name of s1

Set the following for all future storage definitions:
Only start doing I/O 4096 bytes into the start of the device
(Align I/O to 4k but this is redundant as alignment defaults to
xfersize according to the documentation for align=)

(Since no storage definitions are defined after this point the above
looks redundant)

Define a workload definition called wd1 with the following parameters:
Use storage definition s1
Only do write I/O
Make every I/O go to a different address (100% random)

Define a run definition called rd1 with the following parameters:
Use a workload definition wd1
Run I/O as quickly as possible
Use an I/O depth of 128 by using 128 threads (per process?)
(Use a block size of 4KBytes but this is redundant as xfersize
defaults to 4KBytes according to the documentation for xfersize=)
Do the workload for 18000 seconds (5 hours)
Tell me what's happening every second

Here's what I think your fio workload does:

>  [global]
> name=4ktest
> filename=\\.\physicaldrive1
> direct=1
> numjobs=8
> norandommap
> ba=4k
> time_based
> size=745g
> log_avg_msec=100000
> group_reporting=1
> #########################################################
>
> [4K Precon]
> stonewall
> runtime=15000
> iodepth=32
> bs=4k
> rw=randwrite

Set these as globals to all jobs:
Use a name of 4ktest
Use the disk \\.\physicaldrive1
Write directly to the disk
Spawn each job eight times
Don't worry about trying to write all blocks evenly
(align I/O to 4Kbytes but this is redundant because blockalign=int
says it defaults to the minimum blocksize)
Quit based on time
Only do I/O within the first 745 GBytes of the device
Average stats over 100000 milliseconds
Display stats for jobs as a whole rather than individually

Define an individual job:
(stonewall is discarded because jobs are grouped by numjobs)
Run this job for 15000 seconds (4.1 hours)
Queue I/O up to a depth of 32
Use a block size of 4KBytes
Do random writes.

So I'd argue your vdbench and fio jobs are not doing the same thing.

1. Since you're on Windows fio only has access to threads not
processes whereas vdbench is able to make use of processes and
threads.
2. Your vdbench setup can submit up to a theoretical maximum I/O depth
of 512 whereas you are limiting your fio setup to a theoretical max
I/O depth of 256.
3. Your vdbench setup will not access the first block of your raw disk
(which is where the partition table lives which can trigger extra work
when written).
4. 86% of 850 is 731...
5. We don't know if the random distributions (used to choose the data
being written and the next location to write) that fio and vdbench use
are comparable.
6. Your vdbench setup is effectively able to submit I/O asynchronously
because it's coming from individual threads, but you don't use the
windowsaio ioengine - the individual fio jobs only do something similar.
7. You are running these jobs for different lengths of time.

If all you're trying to do is run a limited random job as quickly as
possible you may find there are simpler fio jobs that will get higher
throughput...

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  7:56                     ` Jon Tango
  2014-09-30 13:07                       ` Sitsofe Wheeler
@ 2014-09-30 14:44                       ` Jens Axboe
  1 sibling, 0 replies; 26+ messages in thread
From: Jens Axboe @ 2014-09-30 14:44 UTC (permalink / raw)
  To: Jon Tango, 'Sitsofe Wheeler'; +Cc: fio

On 2014-09-30 01:56, Jon Tango wrote:
>
>
> On 30 September 2014 07:34, Jon Tango <cheerios123@outlook.com> wrote:
>>
>> The taskfile is this:
>
> I should have been more specific - you need to show both the _vdbench_ parameter file that you are comparing to in addition to showing your fio job file.
>
> --
> Sitsofe | http://sucs.org/~sits/
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
> Here you go :) VDBench uses percentages for specifying the LBA range.
>
> hd=localhost,clients=4,jvms=4
> sd=s1,lun=\\.\PhysicalDrive1,align=4096,range=(0,86)
> *
> sd=default,offset=4096,align=4096
> wd=wd1,sd=s1,rdpct=0,seekpct=100
> *
> rd=rd1,wd=wd1,iorate=max,forthreads=128,xfersize=4k,elapsed=18000,interval=1

Side note, you can do that in fio, too. Just do

size=86%

and it'll work.
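
For instance, the [global] section of the earlier job would then look
something like this (size=745g swapped for the percentage, other lines
as before):

[global]
filename=\\.\physicaldrive1
direct=1
size=86%
time_based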

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  8:57                     ` Andrey Kuzmin
  2014-09-30  9:14                       ` Jon Tango
@ 2014-09-30 14:47                       ` Jens Axboe
  2014-10-01 16:33                         ` Andrey Kuzmin
  1 sibling, 1 reply; 26+ messages in thread
From: Jens Axboe @ 2014-09-30 14:47 UTC (permalink / raw)
  To: Andrey Kuzmin, Jon Tango; +Cc: Sitsofe Wheeler, fio

On 2014-09-30 02:57, Andrey Kuzmin wrote:
> The same thing could be done by partitioning the SSD in the operating
> system with the desired partition size, and then running a time-based
> fio job against the partition.
>
> Basically, fio options offset/size should yield the same net result,
> but I'm always unsure on what is the actual effect of mixing size and
> time_based in a single job, and whether fio will wrap around the
> specified size to the specified offset if the job is time-based. HOWTO
> could have used some clarification on size/time_based interaction.

The HOWTO currently states:

time_based      If set, fio will run for the duration of the runtime
                 specified even if the file(s) are completely read
                 or written. It will simply loop over the same workload
                 as many times as the runtime allows.

Especially the last sentence should make it clear that it will simply 
run the specified workload over and over until the time has passed. I'd 
welcome changes to make that clearer, if it doesn't get the point across.
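
In job-file terms that boils down to something like (using the numbers
from this thread):

size=745g
time_based
runtime=15000
# fio keeps looping over the 0..745g region until the 15000 seconds are up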

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30  9:17                         ` Andrey Kuzmin
@ 2014-09-30 14:48                           ` Jens Axboe
  0 siblings, 0 replies; 26+ messages in thread
From: Jens Axboe @ 2014-09-30 14:48 UTC (permalink / raw)
  To: Andrey Kuzmin, Jon Tango; +Cc: Sitsofe Wheeler, fio

On 2014-09-30 03:17, Andrey Kuzmin wrote:
> On Tue, Sep 30, 2014 at 1:14 PM, Jon Tango <cheerios123@outlook.com> wrote:
>>
>> The same thing could be done by partitioning the SSD in the operating system with the desired partition size, and then running a time-based fio job against the partition.
>>
>> Basically, fio options offset/size should yield the same net result, but I'm always unsure on what is the actual effect of mixing size and time_based in a single job, and whether fio will wrap around the specified size to the specified offset if the job is time-based. HOWTO could have used some clarification on size/time_based interaction.
>>
>> Regards,
>> Andrey
>>
>> On Sep 30, 2014 11:41 AM, "Sitsofe Wheeler" <sitsofe@gmail.com> wrote:
>>>
>>> On 30 September 2014 07:34, Jon Tango <cheerios123@outlook.com> wrote:
>>>>
>>>> The taskfile is this:
>>>
>>> I should have been more specific - you need to show both the _vdbench_
>>> parameter file that you are comparing to in addition to showing your
>>> fio job file.
>>>
>>> --
>>> Sitsofe |http://sucs.org/~sits/
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe fio" in the
>>> body of a message tomajordomo@vger.kernel.org More majordomo info at
>>> http://vger.kernel.org/majordomo-info.html
>> --
>> To unsubscribe from this list: send the line "unsubscribe fio" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>> I'm really wanting to avoid the filesystem, they create so much interference. I am focusing on just testing the raw device :)
>
> I'm not sure why a filesystem should be involved in partitioning, that
> happens at the raw device level.

I think there's some limitation in Windows that doesn't allow raw IO to a 
partition. I may be mistaken.

But I agree, a partition would work. But just limiting the LBA range with 
size= will do the same job.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30 13:07                       ` Sitsofe Wheeler
@ 2014-09-30 21:17                         ` Sitsofe Wheeler
  0 siblings, 0 replies; 26+ messages in thread
From: Sitsofe Wheeler @ 2014-09-30 21:17 UTC (permalink / raw)
  To: Jon Tango; +Cc: Jens Axboe, fio

On 30 September 2014 14:07, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
> If all you're trying to do is run a limited random job as quickly as
> possible you may find there are simpler fio jobs that will get higher
> throughput...

Perhaps something like the following (this would be the entire file):

[4K Precon]
name=4ktest
filename=\\.\physicaldrive1
ioengine=windowsaio
direct=1
time_based
runtime=18000
size=86%
bs=4k
offset=4k
norandommap
gtod_reduce=1
iodepth=256
thread

Again it's not the same as your vdbench workload but does it show a
different speed to your previous fio job?
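
If the above were saved as, say, precon.ini (filename arbitrary), it could
be run with just:

fio precon.ini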

-- 
Sitsofe | http://sucs.org/~sits/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Limit LBA Range
  2014-09-30 14:47                       ` Jens Axboe
@ 2014-10-01 16:33                         ` Andrey Kuzmin
  0 siblings, 0 replies; 26+ messages in thread
From: Andrey Kuzmin @ 2014-10-01 16:33 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Sitsofe Wheeler, fio, Jon Tango

On Sep 30, 2014 6:47 PM, "Jens Axboe" <axboe@kernel.dk> wrote:
>
> On 2014-09-30 02:57, Andrey Kuzmin wrote:
>>
>> The same thing could be done by partitioning the SSD in the operating
>> system with the desired partition size, and then running a time-based
>> fio job against the partition.
>>
>> Basically, fio options offset/size should yield the same net result,
>> but I'm always unsure on what is the actual effect of mixing size and
>> time_based in a single job, and whether fio will wrap around the
>> specified size to the specified offset if the job is time-based. HOWTO
>> could have used some clarification on size/time_based interaction.
>
>
> The HOWTO currently states:
>
> time_based      If set, fio will run for the duration of the runtime
>                 specified even if the file(s) are completely read
>                 or written. It will simply loop over the same workload
>                 as many times as the runtime allows.
>
> Especially the last sentence should make it clear, that it will simply run the specified workload over and over until the time has passed. I'd welcome changes to make that clearer, if it doesn't get the point across.
>

A potentially confusing point is the size and io_size options, which
are specified as follows (quoting from
https://github.com/axboe/fio/blob/master/HOWTO):

size=int	The total size of file io for this job. Fio will run until
		this many bytes has been transferred, unless runtime is
		limited by other options (such as 'runtime', for instance).
		Unless specific nrfiles and filesize options are given,
		fio will divide this size between the available files
		specified by the job. If not set, fio will use the full
		size of the given files or devices. If the files do not
		exist, size must be given. It is also possible to give
		size as a percentage between 1 and 100. If size=20% is
		given, fio will use 20% of the full size of the given
		files or devices.

io_limit=int	Normally fio operates within the region set by 'size', which
		means that the 'size' option sets both the region and size of
		IO to be performed. Sometimes that is not what you want. With
		this option, it is possible to define just the amount of IO
		that fio should do. For instance, if 'size' is set to 20G and
		'io_limit' is set to 5G, fio will perform IO within the first
		20G but exit when 5G have been done.

Although the runtime exception is properly noted under the 'size'
description above, I - maybe it's just me ;) - didn't find it clear at
first, and had to spend some time checking against the code what happens
when time_based/runtime is mixed with the size and io_size options. I
believe it might be worth an extra sentence saying that time_based and
io_size are mutually exclusive and that, for time_based jobs,
(offset/?)size just specifies the file/device range to be used, with
sequential I/O wrapping around as/if necessitated by runtime/size. YMMV.
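
Spelling the HOWTO's own example out as a job fragment makes the
distinction concrete (numbers illustrative):

# operate only within the first 20G of the device...
size=20g
# ...but stop once 5G worth of IO has been done
io_limit=5g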

Regards,
Andrey

> --
> Jens Axboe
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2014-10-01 16:33 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
2014-09-29  3:28 Limit LBA Range Jon Tango
2014-09-29 19:33 ` Jens Axboe
2014-09-29 20:46   ` Jon Tango
2014-09-29 20:59     ` Jens Axboe
2014-09-29 21:10       ` Jon Tango
2014-09-29 21:21         ` Jens Axboe
2014-09-29 21:37           ` Jon Tango
2014-09-29 21:38             ` Jens Axboe
2014-09-30  2:23         ` Sitsofe Wheeler
2014-09-30  2:49           ` Jens Axboe
2014-09-30  3:18             ` Sitsofe Wheeler
2014-09-30  4:57             ` Jon Tango
2014-09-30  6:23               ` Sitsofe Wheeler
2014-09-30  6:34                 ` Jon Tango
2014-09-30  7:36                   ` Sitsofe Wheeler
2014-09-30  7:56                     ` Jon Tango
2014-09-30 13:07                       ` Sitsofe Wheeler
2014-09-30 21:17                         ` Sitsofe Wheeler
2014-09-30 14:44                       ` Jens Axboe
2014-09-30  8:57                     ` Andrey Kuzmin
2014-09-30  9:14                       ` Jon Tango
2014-09-30  9:17                         ` Andrey Kuzmin
2014-09-30 14:48                           ` Jens Axboe
2014-09-30 14:47                       ` Jens Axboe
2014-10-01 16:33                         ` Andrey Kuzmin
2014-09-29 21:01   ` Jon Tango
