* Re: FIO windows
[not found] ` <MWHPR11MB2045DFA579DB2BDC5B1EA466875E0@MWHPR11MB2045.namprd11.prod.outlook.com>
@ 2017-10-31 18:51 ` Jens Axboe
2017-10-31 19:00 ` Rebecca Cran
2017-10-31 19:46 ` Sitsofe Wheeler
0 siblings, 2 replies; 38+ messages in thread
From: Jens Axboe @ 2017-10-31 18:51 UTC (permalink / raw)
To: David Hare, fio
Not sure, I generally don't touch that side of things. I know that
appveyor builds all of them, but I don't know if they make the build
available.
CC'ing the list, folks there would know...
On 10/31/2017 12:47 PM, David Hare wrote:
> Hi Jens,
>
> Can't find an installer for 3.1 for windows. Not sure if anyone compiled it yet?
> -Dave
>
--
Jens Axboe
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 18:51 ` FIO windows Jens Axboe
@ 2017-10-31 19:00 ` Rebecca Cran
2017-10-31 19:07 ` David Hare
2017-10-31 19:46 ` Sitsofe Wheeler
1 sibling, 1 reply; 38+ messages in thread
From: Rebecca Cran @ 2017-10-31 19:00 UTC (permalink / raw)
To: Jens Axboe; +Cc: David Hare, fio
I’ll build the installer later today.
Sent from my iPhone
> On Oct 31, 2017, at 12:51 PM, Jens Axboe <axboe@kernel.dk> wrote:
>
> Not sure, I generally don't touch that side of things. I know that
> appveyor builds all of them, but I don't know if they make the build
> available.
>
> CC'ing the list, folks there would know...
>
>> On 10/31/2017 12:47 PM, David Hare wrote:
>> Hi Jens,
>>
>> Can't find an installer for 3.1 for windows. Not sure if anyone compiled it yet?
>> -Dave
>>
>
> --
> Jens Axboe
>
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 19:00 ` Rebecca Cran
@ 2017-10-31 19:07 ` David Hare
2017-10-31 19:43 ` Rebecca Cran
0 siblings, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 19:07 UTC (permalink / raw)
To: Rebecca Cran, Jens Axboe; +Cc: fio
[-- Attachment #1: Type: text/plain, Size: 1575 bytes --]
Thanks Rebecca!
Would you mind pinging me with a link when you are done?
Dave Hare
Senior Solutions Specialist M&E
Primary Data
www.primarydata.com
Mobile (818) 687-4401
-----Original Message-----
From: Rebecca Cran [mailto:rebecca@bluestop.org]
Sent: Tuesday, October 31, 2017 12:00 PM
To: Jens Axboe <axboe@kernel.dk>
Cc: David Hare <david.hare@primarydata.com>; fio@vger.kernel.org
Subject: Re: FIO windows
I’ll build the installer later today.
Sent from my iPhone
> On Oct 31, 2017, at 12:51 PM, Jens Axboe <axboe@kernel.dk> wrote:
>
> Not sure, I generally don't touch that side of things. I know that
> appveyor builds all of them, but I don't know if they make the build
> available.
>
> CC'ing the list, folks there would know...
>
>> On 10/31/2017 12:47 PM, David Hare wrote:
>> Hi Jens,
>>
>> Can't find an installer for 3.1 for windows. Not sure if anyone compiled it yet?
>> -Dave
>>
>
> --
> Jens Axboe
>
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in the
> body of a message to majordomo@vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html
Disclaimer
The information contained in this communication from the sender is confidential. It is intended solely for use by the recipient and others authorized to receive it. If you are not the recipient, you are hereby notified that any disclosure, copying, distribution or taking action in relation of the contents of this information is strictly prohibited and may be unlawful.
[-- Attachment #2: Type: text/html, Size: 2366 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 19:07 ` David Hare
@ 2017-10-31 19:43 ` Rebecca Cran
2017-10-31 19:44 ` David Hare
0 siblings, 1 reply; 38+ messages in thread
From: Rebecca Cran @ 2017-10-31 19:43 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
The Windows installers and zip files for 3.1 are available from http://bluestop.org/fio .
—
Rebecca
Sent from my iPhone
> On Oct 31, 2017, at 1:07 PM, David Hare <david.hare@primarydata.com> wrote:
>
> Thanks Rebecca!
>
> Would you mind pinging me with a link when you are done?
>
>
> Dave Hare
> Senior Solutions Specialist M&E
> Primary Data
> www.primarydata.com
> Mobile (818) 687-4401
>
>
>
>
> -----Original Message-----
> From: Rebecca Cran [mailto:rebecca@bluestop.org]
> Sent: Tuesday, October 31, 2017 12:00 PM
> To: Jens Axboe <axboe@kernel.dk>
> Cc: David Hare <david.hare@primarydata.com>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> I’ll build the installer later today.
>
> Sent from my iPhone
>
>> On Oct 31, 2017, at 12:51 PM, Jens Axboe <axboe@kernel.dk> wrote:
>>
>> Not sure, I generally don't touch that side of things. I know that
>> appveyor builds all of them, but I don't know if they make the build
>> available.
>>
>> CC'ing the list, folks there would know...
>>
>>> On 10/31/2017 12:47 PM, David Hare wrote:
>>> Hi Jens,
>>>
>>> Can't find an installer for 3.1 for windows. Not sure if anyone compiled it yet?
>>> -Dave
>>>
>>
>> --
>> Jens Axboe
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe fio" in the
>> body of a message to majordomo@vger.kernel.org More majordomo info at
>> http://vger.kernel.org/majordomo-info.html
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 19:43 ` Rebecca Cran
@ 2017-10-31 19:44 ` David Hare
0 siblings, 0 replies; 38+ messages in thread
From: David Hare @ 2017-10-31 19:44 UTC (permalink / raw)
To: Rebecca Cran; +Cc: Jens Axboe, fio
Thanks Rebecca!
-----Original Message-----
From: Rebecca Cran [mailto:rebecca@bluestop.org]
Sent: Tuesday, October 31, 2017 12:43 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
The Windows installers and zip files for 3.1 are available from http://bluestop.org/fio .
—
Rebecca
Sent from my iPhone
> On Oct 31, 2017, at 1:07 PM, David Hare <david.hare@primarydata.com> wrote:
>
> Thanks Rebecca!
>
> Would you mind pinging me with a link when you are done?
>
>
> Dave Hare
> Senior Solutions Specialist M&E
> Primary Data
> www.primarydata.com
> Mobile (818) 687-4401
>
>
>
>
> -----Original Message-----
> From: Rebecca Cran [mailto:rebecca@bluestop.org]
> Sent: Tuesday, October 31, 2017 12:00 PM
> To: Jens Axboe <axboe@kernel.dk>
> Cc: David Hare <david.hare@primarydata.com>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> I’ll build the installer later today.
>
> Sent from my iPhone
>
>> On Oct 31, 2017, at 12:51 PM, Jens Axboe <axboe@kernel.dk> wrote:
>>
>> Not sure, I generally don't touch that side of things. I know that
>> appveyor builds all of them, but I don't know if they make the build
>> available.
>>
>> CC'ing the list, folks there would know...
>>
>>> On 10/31/2017 12:47 PM, David Hare wrote:
>>> Hi Jens,
>>>
>>> Can't find an installer for 3.1 for windows. Not sure if anyone compiled it yet?
>>> -Dave
>>>
>>
>> --
>> Jens Axboe
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe fio" in the
>> body of a message to majordomo@vger.kernel.org More majordomo info at
>> http://vger.kernel.org/majordomo-info.html
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 18:51 ` FIO windows Jens Axboe
2017-10-31 19:00 ` Rebecca Cran
@ 2017-10-31 19:46 ` Sitsofe Wheeler
2017-10-31 19:49 ` David Hare
2017-10-31 19:50 ` Jens Axboe
1 sibling, 2 replies; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 19:46 UTC (permalink / raw)
To: Jens Axboe; +Cc: David Hare, fio
Hi,
Yes, AppVeyor makes the builds it does available for download. For
example if you look at the latest master version of fio over on
https://github.com/axboe/fio/commits/1de80624466405bccdbc4607d71cd249320da3f1
, click the green tick and choose Details for AppVeyor you'll end up
on https://ci.appveyor.com/project/axboe/fio/build/1.0.380 . You can
then select a build like the x86_64 one and then click the Artifacts
tab taking you to a page like
https://ci.appveyor.com/project/axboe/fio/build/1.0.380/job/ooua768a53epnxeu/artifacts
and you can then download the msi on that page.
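The link layout described above can be sketched programmatically. The account/project ("axboe"/"fio"), build number, and job id below are taken straight from the example URLs in this message and would differ for other builds:

```python
# Assemble the AppVeyor page URLs described above from their parts.
# The concrete values (account, project, build number, job id) come
# from the example links; substitute those of the build you want.
def appveyor_urls(account: str, project: str, build: str, job_id: str) -> dict:
    base = f"https://ci.appveyor.com/project/{account}/{project}"
    return {
        "build": f"{base}/build/{build}",
        "artifacts": f"{base}/build/{build}/job/{job_id}/artifacts",
    }

urls = appveyor_urls("axboe", "fio", "1.0.380", "ooua768a53epnxeu")
print(urls["artifacts"])
```

The artifacts page itself (and the msi on it) can then be fetched with any HTTP client.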
On 31 October 2017 at 18:51, Jens Axboe <axboe@kernel.dk> wrote:
> Not sure, I generally don't touch that side of things. I know that
> appveyor builds all of them, but I don't know if they make the build
> available.
>
> CC'ing the list, folks there would know...
>
> On 10/31/2017 12:47 PM, David Hare wrote:
>> Hi Jens,
>>
>> Can't find an installer for 3.1 for windows. Not sure if anyone compiled it yet?
>> -Dave
>>
>
> --
> Jens Axboe
>
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 19:46 ` Sitsofe Wheeler
@ 2017-10-31 19:49 ` David Hare
2017-10-31 19:50 ` Jens Axboe
1 sibling, 0 replies; 38+ messages in thread
From: David Hare @ 2017-10-31 19:49 UTC (permalink / raw)
To: Sitsofe Wheeler, Jens Axboe; +Cc: fio
[-- Attachment #1: Type: text/plain, Size: 1949 bytes --]
Thanks!
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 12:47 PM
To: Jens Axboe <axboe@kernel.dk>
Cc: David Hare <david.hare@primarydata.com>; fio@vger.kernel.org
Subject: Re: FIO windows
Hi,
Yes, AppVeyor makes the builds it does available for download. For example if you look at the latest master version of fio over on
https://github.com/axboe/fio/commits/1de80624466405bccdbc4607d71cd249320da3f1
, click the green tick and choose Details for AppVeyor you'll end up on https://ci.appveyor.com/project/axboe/fio/build/1.0.380 . You can then select a build like the x86_64 one and then click the Artifacts tab taking you to a page like https://ci.appveyor.com/project/axboe/fio/build/1.0.380/job/ooua768a53epnxeu/artifacts
and you can then download the msi on that page.
On 31 October 2017 at 18:51, Jens Axboe <axboe@kernel.dk> wrote:
> Not sure, I generally don't touch that side of things. I know that
> appveyor builds all of them, but I don't know if they make the build
> available.
>
> CC'ing the list, folks there would know...
>
> On 10/31/2017 12:47 PM, David Hare wrote:
>> Hi Jens,
>>
>> Can't find an installer for 3.1 for windows. Not sure if anyone compiled it yet?
>> -Dave
>>
>
> --
> Jens Axboe
>
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in the
> body of a message to majordomo@vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 3010 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 19:46 ` Sitsofe Wheeler
2017-10-31 19:49 ` David Hare
@ 2017-10-31 19:50 ` Jens Axboe
2017-10-31 19:55 ` David Hare
2017-10-31 20:23 ` Sitsofe Wheeler
1 sibling, 2 replies; 38+ messages in thread
From: Jens Axboe @ 2017-10-31 19:50 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: David Hare, fio
This is useful information; any chance that could get added
to the documentation?
On 10/31/2017 01:46 PM, Sitsofe Wheeler wrote:
> Hi,
>
> Yes, AppVeyor makes the builds it does available for download. For
> example if you look at the latest master version of fio over on
> https://github.com/axboe/fio/commits/1de80624466405bccdbc4607d71cd249320da3f1
> , click the green tick and choose Details for AppVeyor you'll end up
> on https://ci.appveyor.com/project/axboe/fio/build/1.0.380 . You can
> then select a build like the x86_64 one and then click the Artifacts
> tab taking you to a page like
> https://ci.appveyor.com/project/axboe/fio/build/1.0.380/job/ooua768a53epnxeu/artifacts
> and you can then download the msi on that page.
>
> On 31 October 2017 at 18:51, Jens Axboe <axboe@kernel.dk> wrote:
>> Not sure, I generally don't touch that side of things. I know that
>> appveyor builds all of them, but I don't know if they make the build
>> available.
>>
>> CC'ing the list, folks there would know...
>>
>> On 10/31/2017 12:47 PM, David Hare wrote:
>>> Hi Jens,
>>>
>>> Can't find an installer for 3.1 for windows. Not sure if anyone compiled it yet?
>>> -Dave
>>>
>>
>> --
>> Jens Axboe
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe fio" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
>
>
--
Jens Axboe
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 19:50 ` Jens Axboe
@ 2017-10-31 19:55 ` David Hare
2017-10-31 19:59 ` Sitsofe Wheeler
2017-10-31 20:23 ` Sitsofe Wheeler
1 sibling, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 19:55 UTC (permalink / raw)
To: Jens Axboe, Sitsofe Wheeler; +Cc: fio
[-- Attachment #1: Type: text/plain, Size: 2908 bytes --]
Jens,
Still getting the error below when I add a 9th drive:
fio: pid=6040, err=22/file:ioengines.c:333, func=td_io_queue, error=Invalid argument
fio: pid=5880, err=22/file:ioengines.c:333, func=td_io_queue, error=Invalid argument
I will follow your direction from this morning and submit a bug report.
-Dave
FIO file:
[global]
ioengine=windowsaio
blocksize=64k
direct=1
iodepth=256
group_reporting
rw=read
size=1g
numjobs=2
thread=8
time_based
runtime=10
;cpus_allowed=1-43
;cpus_allowed_policy=shared
;verify_async_cpus=shared
[asdf]
filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
;N\:\\testfile:
;O\:\\testfile:P\:\\testfile:Q\:\\testfile
-----Original Message-----
From: Jens Axboe [mailto:axboe@kernel.dk]
Sent: Tuesday, October 31, 2017 12:50 PM
To: Sitsofe Wheeler <sitsofe@gmail.com>
Cc: David Hare <david.hare@primarydata.com>; fio@vger.kernel.org
Subject: Re: FIO windows
This is useful information; any chance that could get added to the documentation?
On 10/31/2017 01:46 PM, Sitsofe Wheeler wrote:
> Hi,
>
> Yes, AppVeyor makes the builds it does available for download. For
> example if you look at the latest master version of fio over on
> https://github.com/axboe/fio/commits/1de80624466405bccdbc4607d71cd249320da3f1
> , click the green tick and choose Details for AppVeyor you'll
> end up on https://ci.appveyor.com/project/axboe/fio/build/1.0.380 .
> You can then select a build like the x86_64 one and then click the
> Artifacts tab taking you to a page like
> https://ci.appveyor.com/project/axboe/fio/build/1.0.380/job/ooua768a53epnxeu/artifacts
> and you can then download the msi on that page.
>
> On 31 October 2017 at 18:51, Jens Axboe <axboe@kernel.dk> wrote:
>> Not sure, I generally don't touch that side of things. I know that
>> appveyor builds all of them, but I don't know if they make the build
>> available.
>>
>> CC'ing the list, folks there would know...
>>
>> On 10/31/2017 12:47 PM, David Hare wrote:
>>> Hi Jens,
>>>
>>> Can't find an installer for 3.1 for windows. Not sure if anyone compiled it yet?
>>> -Dave
>>>
>>
>> --
>> Jens Axboe
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe fio" in the
>> body of a message to majordomo@vger.kernel.org More majordomo info at
>> http://vger.kernel.org/majordomo-info.html
>
>
>
--
Jens Axboe
[-- Attachment #2: Type: text/html, Size: 4208 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 19:55 ` David Hare
@ 2017-10-31 19:59 ` Sitsofe Wheeler
2017-10-31 20:21 ` David Hare
0 siblings, 1 reply; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 19:59 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
Hi,
Any chance you could cut your job file down to the smallest number of
options that still show the problem?
On 31 October 2017 at 19:55, David Hare <david.hare@primarydata.com> wrote:
> Jens,
>
> Still getting the error below when I add a 9th drive:
>
> fio: pid=6040, err=22/file:ioengines.c:333, func=td_io_queue, error=Invalid
> argument
> fio: pid=5880, err=22/file:ioengines.c:333, func=td_io_queue, error=Invalid
> argument
>
> I will follow your direction from this morning and submit a bug report.
>
> -Dave
>
>
> FIO file:
>
> [global]
> ioengine=windowsaio
> blocksize=64k
> direct=1
> iodepth=256
> group_reporting
>
> rw=read
> size=1g
> numjobs=2
>
> thread=8
> time_based
> runtime=10
>
> ;cpus_allowed=1-43
> ;cpus_allowed_policy=shared
> ;verify_async_cpus=shared
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
For example - should you have a trailing : in this line?
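fio splits a filename= value on ':' separators, with '\:' escaping a literal colon inside a path, so a trailing ':' leaves an empty entry behind. A rough sketch of that splitting (a simplification, not fio's actual parser, which also handles other escapes):

```python
import re

def split_fio_filenames(value: str) -> list:
    # Split on ':' separators that are not escaped as '\:', then
    # unescape the remaining '\:' sequences inside each path.
    # Simplified sketch only -- fio's real parser handles more cases.
    parts = re.split(r"(?<!\\):", value)
    return [p.replace("\\:", ":") for p in parts]

# A trailing separator yields an empty final entry:
print(split_fio_filenames(r"F\:\testfile:G\:\testfile:"))
```

The last list element comes out empty, which is the kind of stray entry the trailing ':' in the job file above would produce.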
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 19:59 ` Sitsofe Wheeler
@ 2017-10-31 20:21 ` David Hare
2017-10-31 20:29 ` Sitsofe Wheeler
0 siblings, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 20:21 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 4100 bytes --]
I cut the file down and started building it back up. The issue seems to be with "size=1g"; if I remove the size it runs.
[global]
ioengine=windowsaio
blocksize=64k
direct=1
iodepth=64
group_reporting
rw=read
;size=1g
numjobs=8
thread=8
time_based
runtime=10
;cpus_allowed=1-43
;cpus_allowed_policy=shared
;verify_async_cpus=shared
[asdf]
filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
;N\:\\testfile:
;O\:\\testfile:P\:\\testfile:Q\:\\testfile
Here's the output:
asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=windowsaio, iodepth=64
...
fio-3.1
Starting 8 threads
asdf: (groupid=0, jobs=8): err= 0: pid=6112: Tue Oct 31 20:17:43 2017
read: IOPS=164k, BW=10.0GiB/s (10.8GB/s)(100GiB/10012msec)
slat (usec): min=8, max=2192, avg=18.69, stdev=16.56
clat (nsec): min=1407, max=43278k, avg=3069841.35, stdev=5194977.44
lat (usec): min=132, max=43586, avg=3088.53, stdev=5195.51
clat percentiles (usec):
| 1.00th=[ 190], 5.00th=[ 231], 10.00th=[ 255], 20.00th=[ 289],
| 30.00th=[ 318], 40.00th=[ 347], 50.00th=[ 379], 60.00th=[ 429],
| 70.00th=[ 603], 80.00th=[ 6194], 90.00th=[10290], 95.00th=[17695],
| 99.00th=[18482], 99.50th=[19792], 99.90th=[20841], 99.95th=[21103],
| 99.99th=[24511]
bw ( MiB/s): min= 903, max= 1718, per=12.28%, avg=1260.89, stdev=280.99, samples=160
iops : min=14457, max=27496, avg=20173.74, stdev=4495.88, samples=160
lat (usec) : 2=0.01%, 4=0.01%, 100=0.01%, 250=8.89%, 500=58.05%
lat (usec) : 750=4.82%, 1000=0.92%
lat (msec) : 2=0.66%, 4=0.42%, 10=14.28%, 20=11.51%, 50=0.45%
cpu : usr=0.00%, sys=29.97%, ctx=0, majf=0, minf=0
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=38.1%, >=64=61.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.2%, 8=1.6%, 16=0.2%, 32=0.1%, 64=0.1%, >=64=0.0%
issued rwt: total=1644736,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=10.0GiB/s (10.8GB/s), 4096MiB/s-10.0GiB/s (4295MB/s-10.8GB/s), io=100GiB (108GB), run=10012-10012msec
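As a sanity check, the reported bandwidth is consistent with the total I/O count times the 64 KiB block size over the ~10 s runtime (all values read off the output above):

```python
# Cross-check the summary line: total reads * block size / runtime
# should reproduce the reported ~10.0 GiB/s (values from the output above).
total_ios = 1644736          # "issued rwt: total=1644736,0,0"
block_size = 64 * 1024       # blocksize=64k, in bytes
runtime_s = 10.012           # "run=10012-10012msec"

bw_gib_s = total_ios * block_size / runtime_s / 2**30
print(f"{bw_gib_s:.2f} GiB/s")
```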
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 12:59 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Hi,
Any chance you could cut your job file down to the smallest number of options that still show the problem?
On 31 October 2017 at 19:55, David Hare <david.hare@primarydata.com> wrote:
> Jens,
>
> Still getting the error below when I add a 9th drive:
>
> fio: pid=6040, err=22/file:ioengines.c:333, func=td_io_queue,
> error=Invalid argument
> fio: pid=5880, err=22/file:ioengines.c:333, func=td_io_queue,
> error=Invalid argument
>
> I will follow your direction from this morning and submit a bug report.
>
> -Dave
>
>
> FIO file:
>
> [global]
> ioengine=windowsaio
> blocksize=64k
> direct=1
> iodepth=256
> group_reporting
>
> rw=read
> size=1g
> numjobs=2
>
> thread=8
> time_based
> runtime=10
>
> ;cpus_allowed=1-43
> ;cpus_allowed_policy=shared
> ;verify_async_cpus=shared
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
For example - should you have a trailing : in this line?
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 5101 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 19:50 ` Jens Axboe
2017-10-31 19:55 ` David Hare
@ 2017-10-31 20:23 ` Sitsofe Wheeler
1 sibling, 0 replies; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 20:23 UTC (permalink / raw)
To: Jens Axboe; +Cc: David Hare, fio
I'll see what I can do...
On 31 October 2017 at 19:50, Jens Axboe <axboe@kernel.dk> wrote:
> This is useful information; any chance that could get added
> to the documentation?
>
> On 10/31/2017 01:46 PM, Sitsofe Wheeler wrote:
>> Hi,
>>
>> Yes, AppVeyor makes the builds it does available for download. For
>> example if you look at the latest master version of fio over on
>> https://github.com/axboe/fio/commits/1de80624466405bccdbc4607d71cd249320da3f1
>> , click the green tick and choose Details for AppVeyor you'll end up
>> on https://ci.appveyor.com/project/axboe/fio/build/1.0.380 . You can
>> then select a build like the x86_64 one and then click the Artifacts
>> tab taking you to a page like
>> https://ci.appveyor.com/project/axboe/fio/build/1.0.380/job/ooua768a53epnxeu/artifacts
>> and you can then download the msi on that page.
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 20:21 ` David Hare
@ 2017-10-31 20:29 ` Sitsofe Wheeler
2017-10-31 20:46 ` David Hare
2017-10-31 20:47 ` David Hare
0 siblings, 2 replies; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 20:29 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
Hi,
Can you keep the problem size line but keep cutting the job down
until it's as small as possible but still shows the issue? e.g. can
you reduce the number of files (e.g. to F-I) and still see the
problem? Then all the way down to F? How about when group_reporting is
commented out? etc.
Also did you know that thread doesn't control the number of threads -
it just enables the feature (see
http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-thread
)?
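In job-file terms, per the documentation pointer above, "thread" is a boolean flag; if eight workers were the goal, a sketch of the intended global section would rely on numjobs instead (assuming 8 was the target count):

```ini
; "thread" only switches fio from forked processes to threads; a
; value like thread=8 still just enables the flag. The worker count
; comes from numjobs (8 here, matching the later job files).
[global]
thread
numjobs=8
```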
On 31 October 2017 at 20:21, David Hare <david.hare@primarydata.com> wrote:
> I cut the file down and started building it back up. The issue seems to be
> with "size=1g"; if I remove the size it runs.
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
> iodepth=64
> group_reporting
>
> rw=read
> ;size=1g
> numjobs=8
>
> thread=8
> time_based
> runtime=10
>
> ;cpus_allowed=1-43
> ;cpus_allowed_policy=shared
> ;verify_async_cpus=shared
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
> ;N\:\\testfile:
>
> ;O\:\\testfile:P\:\\testfile:Q\:\\testfile
>
>
>
>
> Here's the output:
>
> asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T)
> 64.0KiB-64.0KiB, ioengine=windowsaio, iodepth=64
> ...
> fio-3.1
> Starting 8 threads
>
> asdf: (groupid=0, jobs=8): err= 0: pid=6112: Tue Oct 31 20:17:43 2017
> read: IOPS=164k, BW=10.0GiB/s (10.8GB/s)(100GiB/10012msec)
> slat (usec): min=8, max=2192, avg=18.69, stdev=16.56
> clat (nsec): min=1407, max=43278k, avg=3069841.35, stdev=5194977.44
> lat (usec): min=132, max=43586, avg=3088.53, stdev=5195.51
> clat percentiles (usec):
> | 1.00th=[ 190], 5.00th=[ 231], 10.00th=[ 255], 20.00th=[ 289],
> | 30.00th=[ 318], 40.00th=[ 347], 50.00th=[ 379], 60.00th=[ 429],
> | 70.00th=[ 603], 80.00th=[ 6194], 90.00th=[10290], 95.00th=[17695],
> | 99.00th=[18482], 99.50th=[19792], 99.90th=[20841], 99.95th=[21103],
> | 99.99th=[24511]
> bw ( MiB/s): min= 903, max= 1718, per=12.28%, avg=1260.89, stdev=280.99,
> samples=160
> iops : min=14457, max=27496, avg=20173.74, stdev=4495.88, samples=160
> lat (usec) : 2=0.01%, 4=0.01%, 100=0.01%, 250=8.89%, 500=58.05%
> lat (usec) : 750=4.82%, 1000=0.92%
> lat (msec) : 2=0.66%, 4=0.42%, 10=14.28%, 20=11.51%, 50=0.45%
> cpu : usr=0.00%, sys=29.97%, ctx=0, majf=0, minf=0
> IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=38.1%, >=64=61.8%
> submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> complete : 0=0.0%, 4=98.2%, 8=1.6%, 16=0.2%, 32=0.1%, 64=0.1%, >=64=0.0%
> issued rwt: total=1644736,0,0, short=0,0,0, dropped=0,0,0
> latency : target=0, window=0, percentile=100.00%, depth=64
>
> Run status group 0 (all jobs):
> READ: bw=10.0GiB/s (10.8GB/s), 4096MiB/s-10.0GiB/s (4295MB/s-10.8GB/s),
> io=100GiB (108GB), run=10012-10012msec
>
> -Dave
>
>
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 12:59 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Hi,
>
> Any chance you could cut your job file down to the smallest number of
> options that still show the problem?
>
> On 31 October 2017 at 19:55, David Hare <david.hare@primarydata.com> wrote:
>> Jens,
>>
>> Still getting the error below when I add a 9th drive:
>>
>> fio: pid=6040, err=22/file:ioengines.c:333, func=td_io_queue,
>> error=Invalid argument
>> fio: pid=5880, err=22/file:ioengines.c:333, func=td_io_queue,
>> error=Invalid argument
>>
>> I will follow your direction from this morning and submit a bug report.
>>
>> -Dave
>>
>>
>> FIO file:
>>
>> [global]
>> ioengine=windowsaio
>> blocksize=64k
>> direct=1
>> iodepth=256
>> group_reporting
>>
>> rw=read
>> size=1g
>> numjobs=2
>>
>> thread=8
>> time_based
>> runtime=10
>>
>> ;cpus_allowed=1-43
>> ;cpus_allowed_policy=shared
>> ;verify_async_cpus=shared
>>
>> [asdf]
>>
>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
>
> For example - should you have a trailing : in this line?
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 20:29 ` Sitsofe Wheeler
@ 2017-10-31 20:46 ` David Hare
2017-10-31 21:27 ` Sitsofe Wheeler
2017-10-31 20:47 ` David Hare
1 sibling, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 20:46 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 6128 bytes --]
If I remove any of "blocksize=64k", "direct=1", "thread=8", "size=1g", "time_based with runtime=10", or ":P\:\\testfile:" (the 9th drive), the file works.
The below file generates this error:
fio: pid=8404, err=22/file:ioengines.c:333, func=td_io_queue, error=Invalid argument
asdf: (groupid=0, jobs=1): err=22 (file:ioengines.c:333, func=td_io_queue, error=Invalid argument): pid=8404: Tue Oct 31
[global]
ioengine=windowsaio
blocksize=64k
direct=1
thread=8
size=1g
time_based
runtime=10
[asdf]
filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 1:30 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Hi,
Can you keep the problem size line but keep cutting the job down until it's as small as possible but still shows the issue? e.g. can you reduce the number of files (e.g. to F-I) and still see the problem? Then all the way down to F? How about when group_reporting is commented out? etc.
Also did you know that thread doesn't control the number of threads - it just enables the feature (see http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-thread
)?
On 31 October 2017 at 20:21, David Hare <david.hare@primarydata.com> wrote:
> I cut the file down and started building it back up. The issue seems to
> be with "size=1g"; if I remove the size it runs.
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
> iodepth=64
> group_reporting
>
> rw=read
> ;size=1g
> numjobs=8
>
> thread=8
> time_based
> runtime=10
>
> ;cpus_allowed=1-43
> ;cpus_allowed_policy=shared
> ;verify_async_cpus=shared
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
> ;N\:\\testfile:
>
> ;O\:\\testfile:P\:\\testfile:Q\:\\testfile
>
>
>
>
> Here's the output:
>
> asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T)
> 64.0KiB-64.0KiB, ioengine=windowsaio, iodepth=64
> ...
> fio-3.1
> Starting 8 threads
>
> asdf: (groupid=0, jobs=8): err= 0: pid=6112: Tue Oct 31 20:17:43 2017
> read: IOPS=164k, BW=10.0GiB/s (10.8GB/s)(100GiB/10012msec)
> slat (usec): min=8, max=2192, avg=18.69, stdev=16.56
> clat (nsec): min=1407, max=43278k, avg=3069841.35, stdev=5194977.44
> lat (usec): min=132, max=43586, avg=3088.53, stdev=5195.51
> clat percentiles (usec):
> | 1.00th=[ 190], 5.00th=[ 231], 10.00th=[ 255], 20.00th=[ 289],
> | 30.00th=[ 318], 40.00th=[ 347], 50.00th=[ 379], 60.00th=[ 429],
> | 70.00th=[ 603], 80.00th=[ 6194], 90.00th=[10290], 95.00th=[17695],
> | 99.00th=[18482], 99.50th=[19792], 99.90th=[20841], 99.95th=[21103],
> | 99.99th=[24511]
> bw ( MiB/s): min= 903, max= 1718, per=12.28%, avg=1260.89, stdev=280.99, samples=160
> iops : min=14457, max=27496, avg=20173.74, stdev=4495.88, samples=160
> lat (usec) : 2=0.01%, 4=0.01%, 100=0.01%, 250=8.89%, 500=58.05%
> lat (usec) : 750=4.82%, 1000=0.92%
> lat (msec) : 2=0.66%, 4=0.42%, 10=14.28%, 20=11.51%, 50=0.45%
> cpu : usr=0.00%, sys=29.97%, ctx=0, majf=0, minf=0
> IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=38.1%, >=64=61.8%
> submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> complete : 0=0.0%, 4=98.2%, 8=1.6%, 16=0.2%, 32=0.1%, 64=0.1%, >=64=0.0%
> issued rwt: total=1644736,0,0, short=0,0,0, dropped=0,0,0
> latency : target=0, window=0, percentile=100.00%, depth=64
>
> Run status group 0 (all jobs):
> READ: bw=10.0GiB/s (10.8GB/s), 4096MiB/s-10.0GiB/s (4295MB/s-10.8GB/s), io=100GiB (108GB), run=10012-10012msec
>
> -Dave
>
>
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 12:59 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Hi,
>
> Any chance you could cut your job file down to the smallest number of
> options that still show the problem?
>
> On 31 October 2017 at 19:55, David Hare <david.hare@primarydata.com> wrote:
>> Jens,
>>
>> Still getting the error below when I add a 9th drive:
>>
>> fio: pid=6040, err=22/file:ioengines.c:333, func=td_io_queue,
>> error=Invalid argument
>> fio: pid=5880, err=22/file:ioengines.c:333, func=td_io_queue,
>> error=Invalid argument
>>
>> I will follow your direction from this morning and submit a bug report.
>>
>> -Dave
>>
>>
>> FIO file:
>>
>> [global]
>> ioengine=windowsaio
>> blocksize=64k
>> direct=1
>> iodepth=256
>> group_reporting
>>
>> rw=read
>> size=1g
>> numjobs=2
>>
>> thread=8
>> time_based
>> runtime=10
>>
>> ;cpus_allowed=1-43
>> ;cpus_allowed_policy=shared
>> ;verify_async_cpus=shared
>>
>> [asdf]
>>
>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
>
> For example - should you have a trailing : in this line?
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
> Disclaimer
>
> The information contained in this communication from the sender is
> confidential. It is intended solely for use by the recipient and
> others authorized to receive it. If you are not the recipient, you are
> hereby notified that any disclosure, copying, distribution or taking
> action in relation of the contents of this information is strictly
> prohibited and may be unlawful.
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 8066 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 20:29 ` Sitsofe Wheeler
2017-10-31 20:46 ` David Hare
@ 2017-10-31 20:47 ` David Hare
1 sibling, 0 replies; 38+ messages in thread
From: David Hare @ 2017-10-31 20:47 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
Also I have swapped out the P drive with other drives, just to make sure there wasn't an issue with that drive.
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 1:30 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Hi,
Can you keep the problem size line but keep cutting the job down until it's as small as possible while still showing the issue? e.g. can you reduce the number of files (e.g. to F-I) and still see the problem? Then all the way down to F? How about when group_reporting is commented out? etc.
Also did you know that thread doesn't control the number of threads - it just enables the feature (see http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-thread)?
On 31 October 2017 at 20:21, David Hare <david.hare@primarydata.com> wrote:
> I cut the file down and started building it back up. The issue seems to
> be with "size=1g"; if I remove the size, it runs.
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
> iodepth=64
> group_reporting
>
> rw=read
> ;size=1g
> numjobs=8
>
> thread=8
> time_based
> runtime=10
>
> ;cpus_allowed=1-43
> ;cpus_allowed_policy=shared
> ;verify_async_cpus=shared
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
> ;N\:\\testfile:
>
> ;O\:\\testfile:P\:\\testfile:Q\:\\testfile
>
>
>
>
> Here's the output:
>
> asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T)
> 64.0KiB-64.0KiB, ioengine=windowsaio, iodepth=64 ...
> fio-3.1
> Starting 8 threads
>
> asdf: (groupid=0, jobs=8): err= 0: pid=6112: Tue Oct 31 20:17:43 2017
> read: IOPS=164k, BW=10.0GiB/s (10.8GB/s)(100GiB/10012msec)
> slat (usec): min=8, max=2192, avg=18.69, stdev=16.56
> clat (nsec): min=1407, max=43278k, avg=3069841.35, stdev=5194977.44
> lat (usec): min=132, max=43586, avg=3088.53, stdev=5195.51
> clat percentiles (usec):
> | 1.00th=[ 190], 5.00th=[ 231], 10.00th=[ 255], 20.00th=[ 289],
> | 30.00th=[ 318], 40.00th=[ 347], 50.00th=[ 379], 60.00th=[ 429],
> | 70.00th=[ 603], 80.00th=[ 6194], 90.00th=[10290], 95.00th=[17695],
> | 99.00th=[18482], 99.50th=[19792], 99.90th=[20841], 99.95th=[21103],
> | 99.99th=[24511]
> bw ( MiB/s): min= 903, max= 1718, per=12.28%, avg=1260.89, stdev=280.99, samples=160
> iops : min=14457, max=27496, avg=20173.74, stdev=4495.88, samples=160
> lat (usec) : 2=0.01%, 4=0.01%, 100=0.01%, 250=8.89%, 500=58.05%
> lat (usec) : 750=4.82%, 1000=0.92%
> lat (msec) : 2=0.66%, 4=0.42%, 10=14.28%, 20=11.51%, 50=0.45%
> cpu : usr=0.00%, sys=29.97%, ctx=0, majf=0, minf=0
> IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=38.1%, >=64=61.8%
> submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> complete : 0=0.0%, 4=98.2%, 8=1.6%, 16=0.2%, 32=0.1%, 64=0.1%, >=64=0.0%
> issued rwt: total=1644736,0,0, short=0,0,0, dropped=0,0,0
> latency : target=0, window=0, percentile=100.00%, depth=64
>
> Run status group 0 (all jobs):
> READ: bw=10.0GiB/s (10.8GB/s), 4096MiB/s-10.0GiB/s
> (4295MB/s-10.8GB/s), io=100GiB (108GB), run=10012-10012msec
>
> -Dave
>
>
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 12:59 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Hi,
>
> Any chance you could cut your job file down to the smallest number of
> options that still show the problem?
>
> On 31 October 2017 at 19:55, David Hare <david.hare@primarydata.com> wrote:
>> Jens,
>>
>> Still getting the error below when I add a 9th drive:
>>
>> fio: pid=6040, err=22/file:ioengines.c:333, func=td_io_queue,
>> error=Invalid argument
>> fio: pid=5880, err=22/file:ioengines.c:333, func=td_io_queue,
>> error=Invalid argument
>>
>> I will follow your direction from this morning and submit a bug report.
>>
>> -Dave
>>
>>
>> FIO file:
>>
>> [global]
>> ioengine=windowsaio
>> blocksize=64k
>> direct=1
>> iodepth=256
>> group_reporting
>>
>> rw=read
>> size=1g
>> numjobs=2
>>
>> thread=8
>> time_based
>> runtime=10
>>
>> ;cpus_allowed=1-43
>> ;cpus_allowed_policy=shared
>> ;verify_async_cpus=shared
>>
>> [asdf]
>>
>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
>
> For example - should you have a trailing : in this line?
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 20:46 ` David Hare
@ 2017-10-31 21:27 ` Sitsofe Wheeler
2017-10-31 21:46 ` David Hare
0 siblings, 1 reply; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 21:27 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
Hi,
On 31 October 2017 at 20:46, David Hare <david.hare@primarydata.com> wrote:
>
> If I remove - "blocksize=64k" "direct=1" "thread=8" "size=1g" "time_based
> with runtime=10" or ":P\:\\testfile:" (the 9th drive) the file works.
>
> The below file generates this error:
> fio: pid=8404, err=22/file:ioengines.c:333, func=td_io_queue, error=Invalid
> argument
> asdf: (groupid=0, jobs=1): err=22 (file:ioengines.c:333, func=td_io_queue,
> error=Invalid argument): pid=8404: Tue Oct 31
>
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
>
> thread=8
> size=1g
>
> time_based
> runtime=10
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
"thread=8" should be the same as using "thread" and would be required
on Windows. "P\:\\testfile:" should be "P\:\\testfile" (note the lack
of trailing colon - see the examples in
http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
). time_based and runtime sound like they are required. Perhaps a smaller
blocksize means it takes longer than 10 seconds to hit the problem -
presumably blocksize=128k is still as problematic? I'm also assuming the
problem still happens with size=512m?
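As an aside, fio splits the filename value on unescaped colons, with "\:" standing for a literal colon, so a trailing unescaped colon yields an empty final entry. A minimal sketch of that splitting (a simplification - fio's real parser is C code and handles more cases than this):

```python
def split_filenames(value: str) -> list[str]:
    """Split an fio 'filename' value on unescaped ':' characters.

    Simplified sketch: a backslash before ':' makes the colon literal;
    every other character (including backslashes) passes through
    unchanged. fio's real parser handles more than this.
    """
    parts, cur, i = [], [], 0
    while i < len(value):
        if value[i] == "\\" and i + 1 < len(value) and value[i + 1] == ":":
            cur.append(":")             # '\:' -> literal colon in the path
            i += 2
        elif value[i] == ":":
            parts.append("".join(cur))  # unescaped ':' ends one path
            cur = []
            i += 1
        else:
            cur.append(value[i])
            i += 1
    parts.append("".join(cur))
    return parts

# A trailing unescaped ':' produces an empty final entry, which is
# probably not what was intended:
paths = split_filenames(r"F\:\\testfile:G\:\\testfile:")
print(len(paths), repr(paths[-1]))   # 3 ''
```

Dropping the trailing colon gives exactly the intended list of per-drive paths.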
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 21:27 ` Sitsofe Wheeler
@ 2017-10-31 21:46 ` David Hare
2017-10-31 21:54 ` Sitsofe Wheeler
0 siblings, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 21:46 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 2618 bytes --]
It was OK with or without the colon; the size didn't seem to make a difference, but blocksize did - see the commented block sizes below.
fio2.fio
[global]
ioengine=windowsaio
;blocksize=64k - error
;blocksize=32k - error
;blocksize=16k - no error
blocksize=16k
direct=1
thread
size=512g
time_based
runtime=10
[asdf]
filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
Results:
Run status group 0 (all jobs):
READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=1413MiB (1481MB), run=10001-10001msec
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 2:28 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Hi,
On 31 October 2017 at 20:46, David Hare <david.hare@primarydata.com> wrote:
>
> If I remove - "blocksize=64k" "direct=1" "thread=8" "size=1g"
> "time_based with runtime=10" or ":P\:\\testfile:" (the 9th drive) the file works.
>
> The below file generates this error:
> fio: pid=8404, err=22/file:ioengines.c:333, func=td_io_queue,
> error=Invalid argument
> asdf: (groupid=0, jobs=1): err=22 (file:ioengines.c:333,
> func=td_io_queue, error=Invalid argument): pid=8404: Tue Oct 31
>
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
>
> thread=8
> size=1g
>
> time_based
> runtime=10
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
"thread=8" should be the same as using "thread" and would be required on Windows. "P\:\\testfile:" should be "P\:\\testfile" (note the lack of trailing colon - see the examples in http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
). time_based and runtime sound like they are required. Perhaps a smaller blocksize means it takes longer than 10 seconds to hit the problem - presumably blocksize=128k is still as problematic? I'm also assuming the problem still happens with size=512m?
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 3582 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 21:46 ` David Hare
@ 2017-10-31 21:54 ` Sitsofe Wheeler
2017-10-31 22:03 ` David Hare
0 siblings, 1 reply; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 21:54 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
Hi,
Can you add unlink=1 and keep reducing the size parameter (e.g. down
to 128m then down to 16m then down to 4m then down to 1m then down to
512k etc)?
Can you attach the full output that's produced when it fails with this reduced job?
IF you are make the problem happen with very little I/O being done
(i.e. the job bombs out after doing less than 1MiBytes worth of I/O)
you can try adding --debug=all to the job and seeing if that offers
any clues as to what the last thing it was doing was?
On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com> wrote:
> It was OK with or without the colon; the size didn't seem to make a
> difference, but blocksize did - see the commented block sizes below.
>
> fio2.fio
> [global]
>
> ioengine=windowsaio
>
> ;blocksize=64k - error
> ;blocksize=32k - error
> ;blocksize=16k - no error
>
> blocksize=16k
>
> direct=1
>
> thread
>
> size=512g
>
>
>
> time_based
> runtime=10
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>
> Results:
> Run status group 0 (all jobs):
> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=1413MiB
> (1481MB), run=10001-10001msec
>
>
> -Dave
>
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 2:28 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Hi,
>
> On 31 October 2017 at 20:46, David Hare <david.hare@primarydata.com> wrote:
>>
>> If I remove - "blocksize=64k" "direct=1" "thread=8" "size=1g"
>> "time_based with runtime=10" or ":P\:\\testfile:" (the 9th drive) the file
>> works.
>>
>> The below file generates this error:
>> fio: pid=8404, err=22/file:ioengines.c:333, func=td_io_queue,
>> error=Invalid argument
>> asdf: (groupid=0, jobs=1): err=22 (file:ioengines.c:333,
>> func=td_io_queue, error=Invalid argument): pid=8404: Tue Oct 31
>>
>>
>> [global]
>>
>> ioengine=windowsaio
>> blocksize=64k
>> direct=1
>>
>> thread=8
>> size=1g
>>
>> time_based
>> runtime=10
>>
>> [asdf]
>>
>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
>
> "thread=8" should be the same as using "thread" and would be required on
> Windows. "P\:\\testfile:" should be "P\:\\testfile" (note the lack of
> trailing colon - see the examples in
> http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
> ). time_based and runtime sound like they are required. Perhaps a smaller
> blocksize means it takes longer than 10 seconds to hit the problem -
> presumably blocksize=128k is still as problematic? I'm also assuming the
> problem still happens with size=512m?
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 21:54 ` Sitsofe Wheeler
@ 2017-10-31 22:03 ` David Hare
2017-10-31 22:05 ` Sitsofe Wheeler
0 siblings, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 22:03 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 4175 bytes --]
I assume you want me to change the size parameter with a 64k blocksize, as everything is working with a 16k blocksize?
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 2:54 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Hi,
Can you add unlink=1 and keep reducing the size parameter (e.g. down to 128m then down to 16m then down to 4m then down to 1m then down to 512k etc)?
Can you attach the full output that's produced when it fails with this reduced job?
If you can make the problem happen with very little I/O being done (i.e. the job bombs out after doing less than 1 MiB worth of I/O) you can try adding --debug=all to the job and seeing if that offers any clues as to what the last thing it was doing.
On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com> wrote:
> It was OK with or without the colon; the size didn't seem to make a
> difference, but blocksize did - see the commented block sizes below.
>
> fio2.fio
> [global]
>
> ioengine=windowsaio
>
> ;blocksize=64k - error
> ;blocksize=32k - error
> ;blocksize=16k - no error
>
> blocksize=16k
>
> direct=1
>
> thread
>
> size=512g
>
>
>
> time_based
> runtime=10
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\
> testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>
> Results:
> Run status group 0 (all jobs):
> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
> io=1413MiB (1481MB), run=10001-10001msec
>
>
> -Dave
>
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 2:28 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Hi,
>
> On 31 October 2017 at 20:46, David Hare <david.hare@primarydata.com> wrote:
>>
>> If I remove - "blocksize=64k" "direct=1" "thread=8" "size=1g"
>> "time_based with runtime=10" or ":P\:\\testfile:" (the 9th drive) the
>> file works.
>>
>> The below file generates this error:
>> fio: pid=8404, err=22/file:ioengines.c:333, func=td_io_queue,
>> error=Invalid argument
>> asdf: (groupid=0, jobs=1): err=22 (file:ioengines.c:333,
>> func=td_io_queue, error=Invalid argument): pid=8404: Tue Oct 31
>>
>>
>> [global]
>>
>> ioengine=windowsaio
>> blocksize=64k
>> direct=1
>>
>> thread=8
>> size=1g
>>
>> time_based
>> runtime=10
>>
>> [asdf]
>>
>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile:
>
> "thread=8" should be the same as using "thread" and would be required
> on Windows. "P\:\\testfile:" should be "P\:\\testfile" (note the lack
> of trailing colon - see the examples in
> http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
> ). time_based and runtime sound like they are required. Perhaps a smaller
> blocksize means it takes longer than 10 seconds to hit the problem -
> presumably blocksize=128k is still as problematic? I'm also assuming
> the problem still happens with size=512m?
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 5726 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 22:03 ` David Hare
@ 2017-10-31 22:05 ` Sitsofe Wheeler
2017-10-31 22:06 ` David Hare
2017-10-31 22:28 ` David Hare
0 siblings, 2 replies; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 22:05 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
Yes, that's right. Also, previously did you mean you had set size=512m
even though you wrote size=512g?
On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com> wrote:
> I assume you want me to change the size parameter with a 64k blocksize as
> everything is working with 16k blocksize?
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 2:54 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Hi,
>
> Can you add unlink=1 and keep reducing the size parameter (e.g. down to 128m
> then down to 16m then down to 4m then down to 1m then down to 512k etc)?
>
> Can you attach the full output that's produced when it fails with this
> reduced job?
>
> If you can make the problem happen with very little I/O being done (i.e. the
> job bombs out after doing less than 1 MiB worth of I/O) you can try
> adding --debug=all to the job and seeing if that offers any clues as to
> what the last thing it was doing.
>
> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com> wrote:
>> It was OK with or without the colon; the size didn't seem to make a
>> difference, but blocksize did - see the commented block sizes below.
>>
>> fio2.fio
>> [global]
>>
>> ioengine=windowsaio
>>
>> ;blocksize=64k - error
>> ;blocksize=32k - error
>> ;blocksize=16k - no error
>>
>> blocksize=16k
>>
>> direct=1
>>
>> thread
>>
>> size=512g
>>
>>
>>
>> time_based
>> runtime=10
>>
>> [asdf]
>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\
>> testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>
>> Results:
>> Run status group 0 (all jobs):
>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>> io=1413MiB (1481MB), run=10001-10001msec
>>
>>
>> -Dave
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 22:05 ` Sitsofe Wheeler
@ 2017-10-31 22:06 ` David Hare
2017-10-31 22:14 ` Sitsofe Wheeler
2017-10-31 22:28 ` David Hare
1 sibling, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 22:06 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 2667 bytes --]
Yes, I made a typo when I changed it back - sorry.
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 3:05 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Yes, that's right. Also, previously did you mean you had set size=512m even though you wrote size=512g?
On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com> wrote:
> I assume you want me to change the size parameter with a 64k blocksize
> as everything is working with 16k blocksize?
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 2:54 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Hi,
>
> Can you add unlink=1 and keep reducing the size parameter (e.g. down
> to 128m then down to 16m then down to 4m then down to 1m then down to 512k etc)?
>
> Can you attach the full output that's produced when it fails with this
> reduced job?
>
> If you can make the problem happen with very little I/O being done
> (i.e. the job bombs out after doing less than 1 MiB worth of I/O)
> you can try adding --debug=all to the job and seeing if that offers
> any clues as to what the last thing it was doing.
>
> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com> wrote:
>> It was OK with or without the colon; the size didn't seem to make a
>> difference, but blocksize did - see the commented block sizes below.
>>
>> fio2.fio
>> [global]
>>
>> ioengine=windowsaio
>>
>> ;blocksize=64k - error
>> ;blocksize=32k - error
>> ;blocksize=16k - no error
>>
>> blocksize=16k
>>
>> direct=1
>>
>> thread
>>
>> size=512g
>>
>>
>>
>> time_based
>> runtime=10
>>
>> [asdf]
>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\
>> \ testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>
>> Results:
>> Run status group 0 (all jobs):
>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>> io=1413MiB (1481MB), run=10001-10001msec
>>
>>
>> -Dave
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 3693 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 22:06 ` David Hare
@ 2017-10-31 22:14 ` Sitsofe Wheeler
2017-10-31 22:33 ` David Hare
2017-10-31 22:45 ` David Hare
0 siblings, 2 replies; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 22:14 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
One idea is that you are seeing the effect of trying to do I/O to a
file whose size is not a multiple of the blocksize. In theory if you have
size=1g and you have 9 files then each file ends up being 1024**3/9.0
~ 119304647.1 bytes big (see
http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
for where this is described). Could it be that Windows goes on to make
a file that is smaller than what we were asking for?
If this theory were right you might see a similar problem if you were
only using 3 files.
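The arithmetic in this theory can be checked directly (the integer division below is an assumption about how fio rounds the per-file size; the point is only that 1 GiB does not split evenly nine ways):

```python
# size=1g spread over 9 files gives a per-file size that is not a
# multiple of blocksize=64k. Integer division is an assumption about
# fio's rounding.
size = 1024 ** 3        # size=1g
nr_files = 9
blocksize = 64 * 1024   # blocksize=64k

per_file = size // nr_files
print(per_file)               # 119304647
print(per_file % blocksize)   # 29127, i.e. not 64 KiB-aligned
```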
On 31 October 2017 at 22:06, David Hare <david.hare@primarydata.com> wrote:
> Yes.. I made a typo when I changed it back, sorry.
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 3:05 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Yes that's right. Also previously did you mean you had set size=512m even
> though you wrote size=512g ?
>
> On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com> wrote:
>> I assume you want me to change the size parameter with a 64k blocksize
>> as everything is working with 16k blocksize?
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>> Sent: Tuesday, October 31, 2017 2:54 PM
>> To: David Hare <david.hare@primarydata.com>
>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>> Subject: Re: FIO windows
>>
>> Hi,
>>
>> Can you add unlink=1 and keep reducing the size parameter (e.g. down
>> to 128m then down to 16m then down to 4m then down to 1m then down to 512k
>> etc)?
>>
>> Can you attach the full output that's produced when it fails with this
>> reduced job?
>>
>> If you can make the problem happen with very little I/O being done
>> (i.e. the job bombs out after doing less than 1 MiB worth of I/O)
>> you can try adding --debug=all to the job and seeing if that offers
>> any clues as to what the last thing it was doing.
>>
>> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com>
>> wrote:
>>> It was OK with or without the colon; the size didn't seem to make a
>>> difference, but blocksize did - see the commented block sizes below.
>>>
>>> fio2.fio
>>> [global]
>>>
>>> ioengine=windowsaio
>>>
>>> ;blocksize=64k - error
>>> ;blocksize=32k - error
>>> ;blocksize=16k - no error
>>>
>>> blocksize=16k
>>>
>>> direct=1
>>>
>>> thread
>>>
>>> size=512g
>>>
>>>
>>>
>>> time_based
>>> runtime=10
>>>
>>> [asdf]
>>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\
>>> \ testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>>
>>> Results:
>>> Run status group 0 (all jobs):
>>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>>> io=1413MiB (1481MB), run=10001-10001msec
>>>
>>>
>>> -Dave
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 22:05 ` Sitsofe Wheeler
2017-10-31 22:06 ` David Hare
@ 2017-10-31 22:28 ` David Hare
1 sibling, 0 replies; 38+ messages in thread
From: David Hare @ 2017-10-31 22:28 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 2603 bytes --]
It worked with size=249m but failed with size=250m; see attached.
[global]
ioengine=windowsaio
blocksize=64k
direct=1
thread
size=249m
time_based
runtime=10
[asdf]
filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:N\:\\testfile
;:O\:\\testfile:P\:\\testfile:Q\:\\testfile
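Applying the same per-file arithmetic to this 249m/250m boundary (again assuming fio divides size evenly across the 9 files with integer division):

```python
# Per-file size for size=249m vs size=250m over 9 files, and the
# remainder modulo the 64 KiB blocksize. The even split with integer
# division is an assumption about fio's rounding.
MiB = 1024 * 1024
blocksize = 64 * 1024
nr_files = 9

for size_mib in (249, 250):
    per_file = size_mib * MiB // nr_files
    print(size_mib, per_file, per_file % blocksize)
    # 249 29010602 43690
    # 250 29127111 29127
```

Under this simple model neither per-file size is 64 KiB-aligned, so per-file alignment alone does not obviously explain why 249m works and 250m fails; the --debug=all output suggested earlier would help narrow it down.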
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 3:05 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Yes that's right. Also previously did you mean you had set size=512m
even though you wrote size=512g ?
On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com> wrote:
> I assume you want me to change the size parameter with a 64k blocksize as
> everything is working with 16k blocksize?
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 2:54 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Hi,
>
> Can you add unlink=1 and keep reducing the size parameter (e.g. down to 128m
> then down to 16m then down to 4m then down to 1m then down to 512k etc)?
>
> Can you attach the full output that's produced when it fails with this
> reduced job?
>
> If you can make the problem happen with very little I/O being done (i.e. the
> job bombs out after doing less than 1 MiB worth of I/O) you can try
> adding --debug=all to the job and seeing if that offers any clues as to
> what the last thing it was doing.
>
> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com> wrote:
>> It was OK with or without the colon; the size didn't seem to make a
>> difference, but blocksize did - see the commented block sizes below.
>>
>> fio2.fio
>> [global]
>>
>> ioengine=windowsaio
>>
>> ;blocksize=64k - error
>> ;blocksize=32k - error
>> ;blocksize=16k - no error
>>
>> blocksize=16k
>>
>> direct=1
>>
>> thread
>>
>> size=512g
>>
>>
>>
>> time_based
>> runtime=10
>>
>> [asdf]
>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>
>> Results:
>> Run status group 0 (all jobs):
>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>> io=1413MiB (1481MB), run=10001-10001msec
>>
>>
>> -Dave
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: output.249m.txt --]
[-- Type: text/plain, Size: 1638 bytes --]
asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=windowsaio, iodepth=1
fio-3.1
Starting 1 thread
asdf: (groupid=0, jobs=1): err= 0: pid=1268: Tue Oct 31 22:27:09 2017
read: IOPS=5887, BW=368MiB/s (386MB/s)(3680MiB/10001msec)
slat (usec): min=21, max=2124, avg=23.55, stdev=27.20
clat (usec): min=113, max=1996, avg=144.40, stdev=37.03
lat (usec): min=136, max=2347, avg=167.95, stdev=46.60
clat percentiles (usec):
| 1.00th=[ 116], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 123],
| 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 155], 60.00th=[ 157],
| 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161],
| 99.00th=[ 180], 99.50th=[ 217], 99.90th=[ 351], 99.95th=[ 371],
| 99.99th=[ 1926]
bw ( KiB/s): min=361133, max=379037, per=98.58%, avg=371476.79, stdev=4665.69, samples=19
iops : min= 5642, max= 5922, avg=5803.89, stdev=72.87, samples=19
lat (usec) : 250=99.69%, 500=0.28%
lat (msec) : 2=0.03%
cpu : usr=0.00%, sys=10.00%, ctx=0, majf=0, minf=0
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=58885,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=368MiB/s (386MB/s), 368MiB/s-368MiB/s (386MB/s-386MB/s), io=3680MiB (3859MB), run=10001-10001msec
[-- Attachment #3: output.250m.txt --]
[-- Type: text/plain, Size: 1749 bytes --]
asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=windowsaio, iodepth=1
fio-3.1
Starting 1 thread
fio: pid=3224, err=22/file:ioengines.c:333, func=td_io_queue, error=Invalid argument
asdf: (groupid=0, jobs=1): err=22 (file:ioengines.c:333, func=td_io_queue, error=Invalid argument): pid=3224: Tue Oct 31 22:26:38 2017
read: IOPS=5428, BW=339MiB/s (356MB/s)(250MiB/738msec)
slat (usec): min=21, max=2124, avg=27.32, stdev=100.26
clat (usec): min=117, max=424, avg=154.74, stdev=13.60
lat (usec): min=140, max=2380, avg=182.06, stdev=105.15
clat percentiles (usec):
| 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 153], 20.00th=[ 153],
| 30.00th=[ 155], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157],
| 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 163],
| 99.00th=[ 182], 99.50th=[ 227], 99.90th=[ 265], 99.95th=[ 351],
| 99.99th=[ 424]
bw ( KiB/s): min=341996, max=341996, per=98.47%, avg=341996.00, stdev= 0.00, samples=1
iops : min= 5343, max= 5343, avg=5343.00, stdev= 0.00, samples=1
lat (usec) : 250=99.80%, 500=0.17%
cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=4006,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=339MiB/s (356MB/s), 339MiB/s-339MiB/s (356MB/s-356MB/s), io=250MiB (262MB), run=738-738msec
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 22:14 ` Sitsofe Wheeler
@ 2017-10-31 22:33 ` David Hare
2017-10-31 22:45 ` Sitsofe Wheeler
2017-10-31 22:45 ` David Hare
1 sibling, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 22:33 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1.1: Type: text/plain, Size: 4502 bytes --]
You may be on to something!
I tried 3 drives, got the exact same results. See attached.
[global]
ioengine=windowsaio
blocksize=64k
direct=1
thread
size=250m
time_based
runtime=10
[asdf]
filename=F\:\\testfile:G\:\\testfile:H\:\\testfile
;:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:N\:\\testfile
;:O\:\\testfile:P\:\\testfile:Q\:\\testfile
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 3:15 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
One idea is that you are seeing the effect of trying to do I/O to a file that is not a multiple of the blocksize. In theory if you have size=1g and you have 9 files then each file ends up being 1024**3/9.0 ~ 119304647.1111111 big (see http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
for where this is described). Could it be that Windows goes on to make a file that is smaller than what we were asking for?
If this theory were right you might see a similar problem if you were only using 3 files.
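The equal split described above can be checked with plain arithmetic (a sketch of the documented behaviour only, not fio's actual code; the helper name is made up):

```python
# Sketch: fio divides `size` evenly across the files listed in `filename`.
# With size=1g and 9 files, each per-file share is not a multiple of a
# 64k blocksize, so I/O against the tail of each file can be unaligned.

def per_file_share(total_size: int, nr_files: int) -> int:
    """Equal share of `size` per file, truncated to whole bytes."""
    return total_size // nr_files

share = per_file_share(1024 ** 3, 9)  # size=1g split over 9 files
print(share)                          # 119304647 bytes (~119304647.11 as above)
print(share % (64 * 1024))            # 29127 -> not a 64k multiple
```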
On 31 October 2017 at 22:06, David Hare <david.hare@primarydata.com> wrote:
> Yes, I made a typo when I changed it back, sorry.
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 3:05 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Yes that's right. Also previously did you mean you had set size=512m
> even though you wrote size=512g ?
>
> On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com> wrote:
>> I assume you want me to change the size parameter with a 64k
>> blocksize as everything is working with 16k blocksize?
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>> Sent: Tuesday, October 31, 2017 2:54 PM
>> To: David Hare <david.hare@primarydata.com>
>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>> Subject: Re: FIO windows
>>
>> Hi,
>>
>> Can you add unlink=1 and keep reducing the size parameter (e.g. down
>> to 128m then down to 16m then down to 4m then down to 1m then down to
>> 512k etc)?
>>
>> Can you attach the full output that's produced when it fails with this
>> reduced job?
>>
>> If you can make the problem happen with very little I/O being done
>> (i.e. the job bombs out after doing less than 1 MiByte worth of I/O),
>> you can try adding --debug=all to the job and see if that offers
>> any clues as to what it was doing last.
>>
>> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com>
>> wrote:
>>> It was ok with or without the colon, the size didn’t seem to make a
>>> difference, but blocksize did; see the commented block sizes below.
>>>
>>> fio2.fio
>>> [global]
>>>
>>> ioengine=windowsaio
>>>
>>> ;blocksize=64k - error
>>> ;blocksize=32k - error
>>> ;blocksize=16k - no error
>>>
>>> blocksize=16k
>>>
>>> direct=1
>>>
>>> thread
>>>
>>> size=512g
>>>
>>>
>>>
>>> time_based
>>> runtime=10
>>>
>>> [asdf]
>>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>>
>>> Results:
>>> Run status group 0 (all jobs):
>>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>>> io=1413MiB (1481MB), run=10001-10001msec
>>>
>>>
>>> -Dave
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
> Disclaimer
>
> The information contained in this communication from the sender is
> confidential. It is intended solely for use by the recipient and
> others authorized to receive it. If you are not the recipient, you are
> hereby notified that any disclosure, copying, distribution or taking
> action in relation of the contents of this information is strictly
> prohibited and may be unlawful.
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #1.2: Type: text/html, Size: 6264 bytes --]
[-- Attachment #2: output250m-3drives.txt --]
[-- Type: text/plain, Size: 1774 bytes --]
asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=windowsaio, iodepth=1
fio-3.1
Starting 1 thread
fio: pid=7820, err=22/file:ioengines.c:333, func=td_io_queue, error=Invalid argument
asdf: (groupid=0, jobs=1): err=22 (file:ioengines.c:333, func=td_io_queue, error=Invalid argument): pid=7820: Tue Oct 31 22:31:43 2017
read: IOPS=6010, BW=376MiB/s (394MB/s)(250MiB/666msec)
slat (usec): min=21, max=2127, avg=24.25, stdev=56.38
clat (usec): min=114, max=1974, avg=140.16, stdev=36.96
lat (usec): min=136, max=2349, avg=164.41, stdev=69.66
clat percentiles (usec):
| 1.00th=[ 116], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 120],
| 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 128], 60.00th=[ 155],
| 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 163],
| 99.00th=[ 176], 99.50th=[ 212], 99.90th=[ 396], 99.95th=[ 424],
| 99.99th=[ 1975]
bw ( KiB/s): min=383363, max=383363, per=99.68%, avg=383363.00, stdev= 0.00, samples=1
iops : min= 5990, max= 5990, avg=5990.00, stdev= 0.00, samples=1
lat (usec) : 250=99.68%, 500=0.27%
lat (msec) : 2=0.02%
cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=4003,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=376MiB/s (394MB/s), 376MiB/s-376MiB/s (394MB/s-394MB/s), io=250MiB (262MB), run=666-666msec
[-- Attachment #3: output249m-3drives.txt --]
[-- Type: text/plain, Size: 1647 bytes --]
asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=windowsaio, iodepth=1
fio-3.1
Starting 1 thread
asdf: (groupid=0, jobs=1): err= 0: pid=7680: Tue Oct 31 22:31:35 2017
read: IOPS=5771, BW=361MiB/s (378MB/s)(3608MiB/10001msec)
slat (usec): min=21, max=2141, avg=23.22, stdev=18.24
clat (usec): min=116, max=2023, avg=147.97, stdev=36.77
lat (usec): min=138, max=2401, avg=171.20, stdev=41.32
clat percentiles (usec):
| 1.00th=[ 120], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 125],
| 30.00th=[ 128], 40.00th=[ 155], 50.00th=[ 155], 60.00th=[ 157],
| 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161],
| 99.00th=[ 180], 99.50th=[ 219], 99.90th=[ 371], 99.95th=[ 383],
| 99.99th=[ 1942]
bw ( KiB/s): min=354375, max=374280, per=98.72%, avg=364676.74, stdev=5516.54, samples=19
iops : min= 5537, max= 5848, avg=5697.63, stdev=86.20, samples=19
lat (usec) : 250=99.70%, 500=0.27%
lat (msec) : 2=0.03%, 4=0.01%
cpu : usr=0.00%, sys=10.00%, ctx=0, majf=0, minf=0
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=57723,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=361MiB/s (378MB/s), 361MiB/s-361MiB/s (378MB/s-378MB/s), io=3608MiB (3783MB), run=10001-10001msec
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 22:14 ` Sitsofe Wheeler
2017-10-31 22:33 ` David Hare
@ 2017-10-31 22:45 ` David Hare
1 sibling, 0 replies; 38+ messages in thread
From: David Hare @ 2017-10-31 22:45 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 4286 bytes --]
I am pulling the Windows drive from the computer and installing CentOS to run some FIO tests. I'll wait a few minutes to see if you want me to test anything else on Windows before I start.
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 3:15 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
One idea is that you are seeing the effect of trying to do I/O to a file that is not a multiple of the blocksize. In theory if you have size=1g and you have 9 files then each file ends up being 1024**3/9.0 ~ 119304647.1111111 big (see http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
for where this is described). Could it be that Windows goes on to make a file that is smaller than what we were asking for?
If this theory were right you might see a similar problem if you were only using 3 files.
On 31 October 2017 at 22:06, David Hare <david.hare@primarydata.com> wrote:
> Yes, I made a typo when I changed it back, sorry.
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 3:05 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Yes that's right. Also previously did you mean you had set size=512m
> even though you wrote size=512g ?
>
> On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com> wrote:
>> I assume you want me to change the size parameter with a 64k
>> blocksize as everything is working with 16k blocksize?
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>> Sent: Tuesday, October 31, 2017 2:54 PM
>> To: David Hare <david.hare@primarydata.com>
>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>> Subject: Re: FIO windows
>>
>> Hi,
>>
>> Can you add unlink=1 and keep reducing the size parameter (e.g. down
>> to 128m then down to 16m then down to 4m then down to 1m then down to
>> 512k etc)?
>>
>> Can you attach the full output that's produced when it fails with this
>> reduced job?
>>
>> If you can make the problem happen with very little I/O being done
>> (i.e. the job bombs out after doing less than 1 MiByte worth of I/O),
>> you can try adding --debug=all to the job and see if that offers
>> any clues as to what it was doing last.
>>
>> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com>
>> wrote:
>>> It was ok with or without the colon, the size didn’t seem to make a
>>> difference, but blocksize did; see the commented block sizes below.
>>>
>>> fio2.fio
>>> [global]
>>>
>>> ioengine=windowsaio
>>>
>>> ;blocksize=64k - error
>>> ;blocksize=32k - error
>>> ;blocksize=16k - no error
>>>
>>> blocksize=16k
>>>
>>> direct=1
>>>
>>> thread
>>>
>>> size=512g
>>>
>>>
>>>
>>> time_based
>>> runtime=10
>>>
>>> [asdf]
>>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>>
>>> Results:
>>> Run status group 0 (all jobs):
>>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>>> io=1413MiB (1481MB), run=10001-10001msec
>>>
>>>
>>> -Dave
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 5936 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 22:33 ` David Hare
@ 2017-10-31 22:45 ` Sitsofe Wheeler
2017-10-31 22:51 ` David Hare
` (2 more replies)
0 siblings, 3 replies; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 22:45 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
Hmm, I can't reproduce the problem here but still it's curious. Do you
get the same problem with one file and if so after the job runs can
you check what size the file was?
Is there anything special about the filesystems? Are they local NTFS
and quite small (less than 16TBytes)? Do they have a custom cluster
size?
On 31 October 2017 at 22:33, David Hare <david.hare@primarydata.com> wrote:
> You may be on to something!
>
> I tried 3 drives, got the exact same results. See attached.
>
>
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
>
>
> thread
> size=250m
>
>
>
> time_based
> runtime=10
>
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile
>
> ;:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:N\:\\testfile
> ;:O\:\\testfile:P\:\\testfile:Q\:\\testfile
>
>
> -Dave
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 3:15 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> One idea is that you are seeing the effect of trying to do I/O to a file
> that is not a multiple of the blocksize. In theory if you have size=1g and
> you have 9 files then each file ends up being 1024**3/9.0 ~
> 119304647.1111111 big (see
> http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
> for where this is described). Could it be that Windows goes on to make a
> file that is smaller than what we were asking for?
>
> If this theory were right you might see a similar problem if you were only
> using 3 files.
>
> On 31 October 2017 at 22:06, David Hare <david.hare@primarydata.com> wrote:
>> Yes, I made a typo when I changed it back, sorry.
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>> Sent: Tuesday, October 31, 2017 3:05 PM
>> To: David Hare <david.hare@primarydata.com>
>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>> Subject: Re: FIO windows
>>
>> Yes that's right. Also previously did you mean you had set size=512m
>> even though you wrote size=512g ?
>>
>> On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com>
>> wrote:
>>> I assume you want me to change the size parameter with a 64k
>>> blocksize as everything is working with 16k blocksize?
>>>
>>> -----Original Message-----
>>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>>> Sent: Tuesday, October 31, 2017 2:54 PM
>>> To: David Hare <david.hare@primarydata.com>
>>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>>> Subject: Re: FIO windows
>>>
>>> Hi,
>>>
>>> Can you add unlink=1 and keep reducing the size parameter (e.g. down
>>> to 128m then down to 16m then down to 4m then down to 1m then down to
>>> 512k etc)?
>>>
>>> Can you attach the full output that's produced when it fails with this
>>> reduced job?
>>>
>>> If you can make the problem happen with very little I/O being done
>>> (i.e. the job bombs out after doing less than 1 MiByte worth of I/O),
>>> you can try adding --debug=all to the job and see if that offers
>>> any clues as to what it was doing last.
>>>
>>> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com>
>>> wrote:
>>>> It was ok with or without the colon, the size didn’t seem to make a
>>>> difference, but blocksize did; see the commented block sizes below.
>>>>
>>>> fio2.fio
>>>> [global]
>>>>
>>>> ioengine=windowsaio
>>>>
>>>> ;blocksize=64k - error
>>>> ;blocksize=32k - error
>>>> ;blocksize=16k - no error
>>>>
>>>> blocksize=16k
>>>>
>>>> direct=1
>>>>
>>>> thread
>>>>
>>>> size=512g
>>>>
>>>>
>>>>
>>>> time_based
>>>> runtime=10
>>>>
>>>> [asdf]
>>>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>>>
>>>> Results:
>>>> Run status group 0 (all jobs):
>>>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>>>> io=1413MiB (1481MB), run=10001-10001msec
>>>>
>>>>
>>>> -Dave
>>
>> --
>> Sitsofe | http://sucs.org/~sits/
>>
>>
>
>
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 22:45 ` Sitsofe Wheeler
@ 2017-10-31 22:51 ` David Hare
2017-10-31 23:06 ` Sitsofe Wheeler
2017-10-31 22:56 ` David Hare
2017-10-31 23:07 ` David Hare
2 siblings, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 22:51 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
Not sure what you mean by one file? The drives are local NTFS formatted with an allocation unit size of 64k; they range in size from 186GB to 372GB.
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 3:45 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Hmm, I can't reproduce the problem here but still it's curious. Do you get the same problem with one file and if so after the job runs can you check what size the file was?
Is there anything special about the filesystems? Are they local NTFS and quite small (less than 16TBytes)? Do they have a custom cluster size?
On 31 October 2017 at 22:33, David Hare <david.hare@primarydata.com> wrote:
> You may be on to something!
>
> I tried 3 drives, got the exact same results. See attached.
>
>
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
>
>
> thread
> size=250m
>
>
>
> time_based
> runtime=10
>
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile
>
> ;:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:N\:\\testfile
> ;:O\:\\testfile:P\:\\testfile:Q\:\\testfile
>
>
> -Dave
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 3:15 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> One idea is that you are seeing the effect of trying to do I/O to a
> file that is not a multiple of the blocksize. In theory if you have
> size=1g and you have 9 files then each file ends up being 1024**3/9.0
> ~
> 119304647.1111111 big (see
> http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filenam
> e for where this is described). Could it be that Windows goes on to
> make a file that is smaller than what we were asking for?
>
> If this theory were right you might see a similar problem if you were
> only using 3 files.
>
> On 31 October 2017 at 22:06, David Hare <david.hare@primarydata.com> wrote:
>> Yes, I made a typo when I changed it back, sorry.
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>> Sent: Tuesday, October 31, 2017 3:05 PM
>> To: David Hare <david.hare@primarydata.com>
>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>> Subject: Re: FIO windows
>>
>> Yes that's right. Also previously did you mean you had set size=512m
>> even though you wrote size=512g ?
>>
>> On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com>
>> wrote:
>>> I assume you want me to change the size parameter with a 64k
>>> blocksize as everything is working with 16k blocksize?
>>>
>>> -----Original Message-----
>>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>>> Sent: Tuesday, October 31, 2017 2:54 PM
>>> To: David Hare <david.hare@primarydata.com>
>>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>>> Subject: Re: FIO windows
>>>
>>> Hi,
>>>
>>> Can you add unlink=1 and keep reducing the size parameter (e.g. down
>>> to 128m then down to 16m then down to 4m then down to 1m then down
>>> to 512k etc)?
>>>
>>> Can you attach the full output that's produced when it fails with this
>>> reduced job?
>>>
>>> If you can make the problem happen with very little I/O being done
>>> (i.e. the job bombs out after doing less than 1 MiByte worth of I/O),
>>> you can try adding --debug=all to the job and see if that offers
>>> any clues as to what it was doing last.
>>>
>>> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com>
>>> wrote:
>>>> It was ok with or without the colon, the size didn’t seem to make a
>>>> difference, but blocksize did; see the commented block sizes below.
>>>>
>>>> fio2.fio
>>>> [global]
>>>>
>>>> ioengine=windowsaio
>>>>
>>>> ;blocksize=64k - error
>>>> ;blocksize=32k - error
>>>> ;blocksize=16k - no error
>>>>
>>>> blocksize=16k
>>>>
>>>> direct=1
>>>>
>>>> thread
>>>>
>>>> size=512g
>>>>
>>>>
>>>>
>>>> time_based
>>>> runtime=10
>>>>
>>>> [asdf]
>>>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>>>
>>>> Results:
>>>> Run status group 0 (all jobs):
>>>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>>>> io=1413MiB (1481MB), run=10001-10001msec
>>>>
>>>>
>>>> -Dave
>>
>> --
>> Sitsofe | http://sucs.org/~sits/
>>
>>
>
>
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 22:45 ` Sitsofe Wheeler
2017-10-31 22:51 ` David Hare
@ 2017-10-31 22:56 ` David Hare
2017-10-31 23:07 ` David Hare
2 siblings, 0 replies; 38+ messages in thread
From: David Hare @ 2017-10-31 22:56 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1.1: Type: text/plain, Size: 6694 bytes --]
3 files, size=250mb - failed
[cid:image001.png@01D35260.DB1857A0]
3 files, size=249mb – worked
[cid:image002.png@01D35260.DB1857A0]
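The 250m/249m boundary fits the divisibility theory exactly: 249 MiB split across 3 files is a whole multiple of the 64k blocksize, while 250 MiB is not (an arithmetic sketch only, not fio internals; the constants are ours):

```python
MIB = 1024 * 1024
BS = 64 * 1024   # blocksize=64k
NR_FILES = 3     # F:, G:, H:

for size_mib in (249, 250):
    share = size_mib * MIB // NR_FILES  # fio's equal per-file split
    print(size_mib, share, share % BS)
# 249 -> 87031808 bytes = 1328 * 64k exactly (job completes)
# 250 -> 87381333 bytes, remainder 21845 (fails with direct=1 + windowsaio)
```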
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 3:45 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Hmm, I can't reproduce the problem here but still it's curious. Do you get the same problem with one file and if so after the job runs can you check what size the file was?
Is there anything special about the filesystems? Are they local NTFS and quite small (less than 16TBytes)? Do they have a custom cluster size?
On 31 October 2017 at 22:33, David Hare <david.hare@primarydata.com<mailto:david.hare@primarydata.com>> wrote:
> You may be on to something!
>
> I tried 3 drives, got the exact same results. See attached.
>
>
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
>
>
> thread
> size=250m
>
>
>
> time_based
> runtime=10
>
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile
>
> ;:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:N\:\\testfile
> ;:O\:\\testfile:P\:\\testfile:Q\:\\testfile
>
>
> -Dave
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 3:15 PM
> To: David Hare <david.hare@primarydata.com<mailto:david.hare@primarydata.com>>
> Cc: Jens Axboe <axboe@kernel.dk<mailto:axboe@kernel.dk>>; fio@vger.kernel.org<mailto:fio@vger.kernel.org>
> Subject: Re: FIO windows
>
> One idea is that you are seeing the effect of trying to do I/O to a
> file that is not a multiple of the blocksize. In theory if you have
> size=1g and you have 9 files then each file ends up being 1024**3/9.0
> ~
> 119304647.1111111 big (see
> http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filenam
> e for where this is described). Could it be that Windows goes on to
> make a file that is smaller than what we were asking for?
>
> If this theory were right you might see a similar problem if you were
> only using 3 files.
>
> On 31 October 2017 at 22:06, David Hare <david.hare@primarydata.com<mailto:david.hare@primarydata.com>> wrote:
>> Yes, I made a typo when I changed it back, sorry.
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>> Sent: Tuesday, October 31, 2017 3:05 PM
>> To: David Hare <david.hare@primarydata.com<mailto:david.hare@primarydata.com>>
>> Cc: Jens Axboe <axboe@kernel.dk<mailto:axboe@kernel.dk>>; fio@vger.kernel.org<mailto:fio@vger.kernel.org>
>> Subject: Re: FIO windows
>>
>> Yes that's right. Also previously did you mean you had set size=512m
>> even though you wrote size=512g ?
>>
>> On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com<mailto:david.hare@primarydata.com>>
>> wrote:
>>> I assume you want me to change the size parameter with a 64k
>>> blocksize as everything is working with 16k blocksize?
>>>
>>> -----Original Message-----
>>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>>> Sent: Tuesday, October 31, 2017 2:54 PM
>>> To: David Hare <david.hare@primarydata.com<mailto:david.hare@primarydata.com>>
>>> Cc: Jens Axboe <axboe@kernel.dk<mailto:axboe@kernel.dk>>; fio@vger.kernel.org<mailto:fio@vger.kernel.org>
>>> Subject: Re: FIO windows
>>>
>>> Hi,
>>>
>>> Can you add unlink=1 and keep reducing the size parameter (e.g. down
>>> to 128m then down to 16m then down to 4m then down to 1m then down
>>> to 512k etc)?
>>>
>>> Can you attach the full output that's produced when it fails with this
>>> reduced job?
>>>
>>> If you can make the problem happen with very little I/O being done
>>> (i.e. the job bombs out after doing less than 1 MiByte worth of I/O),
>>> you can try adding --debug=all to the job and see if that offers
>>> any clues as to what it was doing last.
>>>
>>> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com<mailto:david.hare@primarydata.com>>
>>> wrote:
>>>> It was ok with or without the colon, the size didn’t seem to make a
>>>> difference, but blocksize did; see the commented block sizes below.
>>>>
>>>> fio2.fio
>>>> [global]
>>>>
>>>> ioengine=windowsaio
>>>>
>>>> ;blocksize=64k - error
>>>> ;blocksize=32k - error
>>>> ;blocksize=16k - no error
>>>>
>>>> blocksize=16k
>>>>
>>>> direct=1
>>>>
>>>> thread
>>>>
>>>> size=512g
>>>>
>>>>
>>>>
>>>> time_based
>>>> runtime=10
>>>>
>>>> [asdf]
>>>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>>>
>>>> Results:
>>>> Run status group 0 (all jobs):
>>>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>>>> io=1413MiB (1481MB), run=10001-10001msec
>>>>
>>>>
>>>> -Dave
>>
>> --
>> Sitsofe | http://sucs.org/~sits/
>>
>>
>> Disclaimer
>>
>> The information contained in this communication from the sender is
>> confidential. It is intended solely for use by the recipient and
>> others authorized to receive it. If you are not the recipient, you
>> are hereby notified that any disclosure, copying, distribution or
>> taking action in relation of the contents of this information is
>> strictly prohibited and may be unlawful.
>
>
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #1.2: Type: text/html, Size: 18764 bytes --]
[-- Attachment #2: image001.png --]
[-- Type: image/png, Size: 4995 bytes --]
[-- Attachment #3: image002.png --]
[-- Type: image/png, Size: 4894 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 22:51 ` David Hare
@ 2017-10-31 23:06 ` Sitsofe Wheeler
2017-10-31 23:09 ` David Hare
2017-11-01 16:44 ` Jens Axboe
0 siblings, 2 replies; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 23:06 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
OK I see the problem too when using this on Windows:
$ ./fio --size=1g --filename
fio1.tmp:fio2.tmp:fio3.tmp:fio4.tmp:fio5.tmp:fio6.tmp:fio7.tmp:fio8.tmp:fio9.tmp
--name=go --create_only=1
$ ./fio --size=250m --filename fio1.tmp:fio2.tmp:fio3.tmp --name=go
--direct=1 --bs=64k --runtime=1m --time_based --thread
Using --debug=all shows the following:
io 2796 getevents: 1
io 2796 io complete: io_u 0000000007BD8F00:
off=87359488/len=65536/ddir=0io 2796 /fio3.tmpio 2796
file 2796 put file fio3.tmp, ref=2
file 2796 trying file fio1.tmp 11
file 2796 goodf=1, badf=2, ff=11
file 2796 get_next_file_rr: 00007FF5FEEA4610
file 2796 get_next_file: 00007FF5FEEA4610 [fio1.tmp]
file 2796 get file fio1.tmp, ref=1
io 2796 fill_io_u: io_u 0000000007BD8F00:
off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
io 2796 prep: io_u 0000000007BD8F00:
off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
io 2796 queue: io_u 0000000007BD8F00:
off=21846/len=65536/ddir=0io 2796 /fio1.tmpio
helperthread 2796
2796 since_ss: 0, next_ss: 1000, next_log: 495, msec_to_next_event: 136
file 2796 put file fio1.tmp, ref=2
io 2796 io_u_queued_complete: min=0
io 2796 getevents: 0
process 2796 pid=2936: runstate RUNNING -> FINISHING
fio: pid=2936, err=22/file:ioengines.c:335, func=td_io_queue,
error=Invalid argument
off=21846 is not a multiple of 512.
$ du -b fio*tmp
119304647 fio1.tmp
119304647 fio2.tmp
119304647 fio3.tmp
119304647 fio4.tmp
119304647 fio5.tmp
119304647 fio6.tmp
119304647 fio7.tmp
119304647 fio8.tmp
119304647 fio9.tmp
I think this is all hinting at a bug in fio (perhaps when working with
multiple pre-existing files that are bigger than size specified) but I
think I'm done for the day...
On 31 October 2017 at 22:51, David Hare <david.hare@primarydata.com> wrote:
>
> Not sure what you mean by one file? The drives are local, NTFS-formatted with
> an allocation unit size of 64k, and they range in size from 186GB to 372GB.
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
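The failing offset in the trace above can be reproduced arithmetically: assuming fio's documented even split of `size` across the colon-separated files, size=250m over 3 files gives a per-file io_size that is not a multiple of the 64k block size, and the wrap-around then lands on an unaligned offset. A quick check (illustrative Python, not fio's actual code):

```python
# Sizes from the failing job: size=250m split across 3 files, bs=64k.
bs = 64 * 1024                       # 65536
size = 250 * 1024 * 1024             # 262144000
io_size = size // 3                  # 87381333 per file -- not a multiple of bs

# The last completed read in the trace started at off=87359488, so the
# next sequential position steps past io_size and triggers the wrap:
last_pos = 87359488 + bs             # 87425024

# Wrap logic at the time: pad io_size by its own remainder, then subtract.
wrapped = io_size + (io_size % bs)   # 87403178
new_off = last_pos - wrapped         # 21846 -- the offset fio then queued

print(io_size % bs, new_off, new_off % 512)   # 21845 21846 342
```

The resulting offset 21846 is exactly the `off=21846` in the debug trace, and since it is not a multiple of 512 the direct I/O fails with EINVAL.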
* RE: FIO windows
2017-10-31 22:45 ` Sitsofe Wheeler
2017-10-31 22:51 ` David Hare
2017-10-31 22:56 ` David Hare
@ 2017-10-31 23:07 ` David Hare
2017-10-31 23:13 ` Sitsofe Wheeler
2017-10-31 23:27 ` Sitsofe Wheeler
2 siblings, 2 replies; 38+ messages in thread
From: David Hare @ 2017-10-31 23:07 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 6543 bytes --]
Just a little back story: I am testing an 88-core Dell R930 with 30 NVMe drives, (6) Kingston PCIe NVMe cards each with 4 M.2 drives (i.e. 24 drives), plus (6) 2.5" U.2 NVMe drives.
I started out with my usual benchmark tools and was not able to get anything above 36GB/s, and I had to run multiple instances of the tools at the same time.
I found Diskspd for Windows; it has a lot of control over threads and affinity, and I was able to reach 52.7GB/s, which is pretty close to the theoretical max of the drives of about 56GB/s.
I am trying to achieve the same results (or close) with FIO so I can compare Windows with CentOS and ultimately our version of Red Hat. I am having trouble tuning FIO much above 14GB/s.
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 3:45 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Hmm, I can't reproduce the problem here but still it's curious. Do you get the same problem with one file and if so after the job runs can you check what size the file was?
Is there anything special about the filesystems? Are they local NTFS and quite small (less than 16TBytes)? Do they have a custom cluster size?
On 31 October 2017 at 22:33, David Hare <david.hare@primarydata.com> wrote:
> You may be on to something!
>
> I tried 3 drives, got the exact same results. See attached.
>
>
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
>
>
> thread
> size=250m
>
>
>
> time_based
> runtime=10
>
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile
>
> ;:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:N\:\\testfile
> ;:O\:\\testfile:P\:\\testfile:Q\:\\testfile
>
>
> -Dave
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 3:15 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> One idea is that you are seeing the effect of trying to do I/O to a
> file that is not a multiple of the blocksize. In theory if you have
> size=1g and you have 9 files then each file ends up being 1024**3/9.0
> ~
> 119304647.1111111 big (see
> http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
> for where this is described). Could it be that Windows goes on to
> make a file that is smaller than what we were asking for?
>
> If this theory were right you might see a similar problem if you were
> only using 3 files.
>
> On 31 October 2017 at 22:06, David Hare <david.hare@primarydata.com> wrote:
>> Yes.. I made a typo when I changed it back, sorry.
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>> Sent: Tuesday, October 31, 2017 3:05 PM
>> To: David Hare <david.hare@primarydata.com>
>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>> Subject: Re: FIO windows
>>
>> Yes that's right. Also previously did you mean you had set size=512m
>> even though you wrote size=512g ?
>>
>> On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com>
>> wrote:
>>> I assume you want me to change the size parameter with a 64k
>>> blocksize as everything is working with 16k blocksize?
>>>
>>> -----Original Message-----
>>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>>> Sent: Tuesday, October 31, 2017 2:54 PM
>>> To: David Hare <david.hare@primarydata.com>
>>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>>> Subject: Re: FIO windows
>>>
>>> Hi,
>>>
>>> Can you add unlink=1 and keep reducing the size parameter (e.g. down
>>> to 128m then down to 16m then down to 4m then down to 1m then down
>>> to 512k etc)?
>>>
>>> Can you attach the full output that's produced when it fails with this
>>> reduced job?
>>>
>>> If you can make the problem happen with very little I/O being done
>>> (i.e. the job bombs out after doing less than 1MiByte worth of I/O),
>>> you can try adding --debug=all to the job and see if that offers
>>> any clues about the last thing it was doing?
>>>
>>> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com>
>>> wrote:
>>>> It was ok with or without the colon, the size didn’t seem to make a
>>>> difference, but blocksize did.. see the commented block sizes below.
>>>>
>>>> fio2.fio
>>>> [global]
>>>>
>>>> ioengine=windowsaio
>>>>
>>>> ;blocksize=64k - error
>>>> ;blocksize=32k - error
>>>> ;blocksize=16k - no error
>>>>
>>>> blocksize=16k
>>>>
>>>> direct=1
>>>>
>>>> thread
>>>>
>>>> size=512g
>>>>
>>>>
>>>>
>>>> time_based
>>>> runtime=10
>>>>
>>>> [asdf]
>>>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>>>
>>>> Results:
>>>> Run status group 0 (all jobs):
>>>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>>>> io=1413MiB (1481MB), run=10001-10001msec
>>>>
>>>>
>>>> -Dave
>>
>> --
>> Sitsofe | http://sucs.org/~sits/
>>
>>
>
>
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 9088 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
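The per-file size arithmetic quoted above (size divided evenly across the colon-separated filenames, per the fio documentation link) can be checked directly; illustrative Python, not fio's implementation:

```python
# size=1g spread over 9 colon-separated files, as in the create_only run.
size = 1024 ** 3            # 1 GiB
per_file = size // 9        # floor of 119304647.111...
print(per_file)             # 119304647 -- matches the `du -b` output above
assert per_file % (64 * 1024) != 0   # and it is not 64k-block-aligned
```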
* RE: FIO windows
2017-10-31 23:06 ` Sitsofe Wheeler
@ 2017-10-31 23:09 ` David Hare
2017-10-31 23:19 ` Sitsofe Wheeler
2017-11-01 16:44 ` Jens Axboe
1 sibling, 1 reply; 38+ messages in thread
From: David Hare @ 2017-10-31 23:09 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 2929 bytes --]
OK, it's easy for me to switch OSes; let me know when you have something ready.
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 4:06 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
OK I see the problem too when using this on Windows:
$ ./fio --size=1g --filename
fio1.tmp:fio2.tmp:fio3.tmp:fio4.tmp:fio5.tmp:fio6.tmp:fio7.tmp:fio8.tmp:fio9.tmp
--name=go --create_only=1
$ ./fio --size=250m --filename fio1.tmp:fio2.tmp:fio3.tmp --name=go
--direct=1 --bs=64k --runtime=1m --time_based --thread
Using --debug=all shows the following:
io 2796 getevents: 1
io 2796 io complete: io_u 0000000007BD8F00:
off=87359488/len=65536/ddir=0io 2796 /fio3.tmpio 2796
file 2796 put file fio3.tmp, ref=2
file 2796 trying file fio1.tmp 11
file 2796 goodf=1, badf=2, ff=11
file 2796 get_next_file_rr: 00007FF5FEEA4610
file 2796 get_next_file: 00007FF5FEEA4610 [fio1.tmp]
file 2796 get file fio1.tmp, ref=1
io 2796 fill_io_u: io_u 0000000007BD8F00:
off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
io 2796 prep: io_u 0000000007BD8F00:
off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
io 2796 queue: io_u 0000000007BD8F00:
off=21846/len=65536/ddir=0io 2796 /fio1.tmpio
helperthread 2796
2796 since_ss: 0, next_ss: 1000, next_log: 495, msec_to_next_event: 136
file 2796 put file fio1.tmp, ref=2
io 2796 io_u_queued_complete: min=0
io 2796 getevents: 0
process 2796 pid=2936: runstate RUNNING -> FINISHING
fio: pid=2936, err=22/file:ioengines.c:335, func=td_io_queue, error=Invalid argument
off=21846 is not a multiple of 512.
$ du -b fio*tmp
119304647 fio1.tmp
119304647 fio2.tmp
119304647 fio3.tmp
119304647 fio4.tmp
119304647 fio5.tmp
119304647 fio6.tmp
119304647 fio7.tmp
119304647 fio8.tmp
119304647 fio9.tmp
I think this is all hinting at a bug in fio (perhaps when working with multiple pre-existing files that are bigger than size specified) but I think I'm done for the day...
On 31 October 2017 at 22:51, David Hare <david.hare@primarydata.com> wrote:
>
> Not sure what you mean by one file? The drives are local, NTFS-formatted
> with an allocation unit size of 64k, and they range in size from 186GB to 372GB.
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 3572 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 23:07 ` David Hare
@ 2017-10-31 23:13 ` Sitsofe Wheeler
2017-10-31 23:27 ` Sitsofe Wheeler
1 sibling, 0 replies; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 23:13 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
Hi,
You might find that deleting any pre-existing testfile files before
running fio works around the problem...
On 31 October 2017 at 23:07, David Hare <david.hare@primarydata.com> wrote:
> Just a little back story: I am testing an 88-core Dell R930 with 30 NVMe
> drives, (6) Kingston PCIe NVMe cards each with 4 M.2 drives (i.e. 24
> drives), plus (6) 2.5" U.2 NVMe drives.
>
> I started out with my usual benchmark tools and was not able to get
> anything above 36GB/s, and I had to run multiple instances of the tools
> at the same time.
>
> I found Diskspd for Windows; it has a lot of control over threads and
> affinity, and I was able to reach 52.7GB/s, which is pretty close to the
> theoretical max of the drives of about 56GB/s.
>
> I am trying to achieve the same results (or close) with FIO so I can
> compare Windows with CentOS and ultimately our version of Red Hat. I am
> having trouble tuning FIO much above 14GB/s.
>
> -Dave
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 3:45 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> Hmm, I can't reproduce the problem here but still it's curious. Do you get
> the same problem with one file and if so after the job runs can you check
> what size the file was?
>
> Is there anything special about the filesystems? Are they local NTFS and
> quite small (less than 16TBytes)? Do they have a custom cluster size?
>
> On 31 October 2017 at 22:33, David Hare <david.hare@primarydata.com> wrote:
>> You may be on to something!
>>
>> I tried 3 drives, got the exact same results. See attached.
>>
>>
>>
>> [global]
>>
>> ioengine=windowsaio
>> blocksize=64k
>> direct=1
>>
>>
>> thread
>> size=250m
>>
>>
>>
>> time_based
>> runtime=10
>>
>>
>> [asdf]
>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile
>>
>> ;:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:N\:\\testfile
>> ;:O\:\\testfile:P\:\\testfile:Q\:\\testfile
>>
>>
>> -Dave
>>
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>> Sent: Tuesday, October 31, 2017 3:15 PM
>> To: David Hare <david.hare@primarydata.com>
>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>> Subject: Re: FIO windows
>>
>> One idea is that you are seeing the effect of trying to do I/O to a
>> file that is not a multiple of the blocksize. In theory if you have
>> size=1g and you have 9 files then each file ends up being 1024**3/9.0
>> ~
>> 119304647.1111111 big (see
>> http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
>> for where this is described). Could it be that Windows goes on to
>> make a file that is smaller than what we were asking for?
>>
>> If this theory were right you might see a similar problem if you were
>> only using 3 files.
>>
>> On 31 October 2017 at 22:06, David Hare <david.hare@primarydata.com>
>> wrote:
>>> Yes.. I made a typo when I changed it back, sorry.
>>>
>>> -----Original Message-----
>>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>>> Sent: Tuesday, October 31, 2017 3:05 PM
>>> To: David Hare <david.hare@primarydata.com>
>>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>>> Subject: Re: FIO windows
>>>
>>> Yes that's right. Also previously did you mean you had set size=512m
>>> even though you wrote size=512g ?
>>>
>>> On 31 October 2017 at 22:03, David Hare <david.hare@primarydata.com>
>>> wrote:
>>>> I assume you want me to change the size parameter with a 64k
>>>> blocksize as everything is working with 16k blocksize?
>>>>
>>>> -----Original Message-----
>>>> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
>>>> Sent: Tuesday, October 31, 2017 2:54 PM
>>>> To: David Hare <david.hare@primarydata.com>
>>>> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
>>>> Subject: Re: FIO windows
>>>>
>>>> Hi,
>>>>
>>>> Can you add unlink=1 and keep reducing the size parameter (e.g. down
>>>> to 128m then down to 16m then down to 4m then down to 1m then down
>>>> to 512k etc)?
>>>>
>>>> Can you attach the full output that's produced when it fails with this
>>>> reduced job?
>>>>
>>>> If you can make the problem happen with very little I/O being done
>>>> (i.e. the job bombs out after doing less than 1MiByte worth of I/O),
>>>> you can try adding --debug=all to the job and see if that offers
>>>> any clues about the last thing it was doing?
>>>>
>>>> On 31 October 2017 at 21:46, David Hare <david.hare@primarydata.com>
>>>> wrote:
>>>>> It was ok with or without the colon, the size didn’t seem to make a
>>>>> difference, but blocksize did.. see the commented block sizes below.
>>>>>
>>>>> fio2.fio
>>>>> [global]
>>>>>
>>>>> ioengine=windowsaio
>>>>>
>>>>> ;blocksize=64k - error
>>>>> ;blocksize=32k - error
>>>>> ;blocksize=16k - no error
>>>>>
>>>>> blocksize=16k
>>>>>
>>>>> direct=1
>>>>>
>>>>> thread
>>>>>
>>>>> size=512g
>>>>>
>>>>>
>>>>>
>>>>> time_based
>>>>> runtime=10
>>>>>
>>>>> [asdf]
>>>>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>>>>
>>>>> Results:
>>>>> Run status group 0 (all jobs):
>>>>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s),
>>>>> io=1413MiB (1481MB), run=10001-10001msec
>>>>>
>>>>>
>>>>> -Dave
>>>
>>> --
>>> Sitsofe | http://sucs.org/~sits/
>>>
>>>
>>
>>
>>
>> --
>> Sitsofe | http://sucs.org/~sits/
>>
>>
>
>
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 23:09 ` David Hare
@ 2017-10-31 23:19 ` Sitsofe Wheeler
0 siblings, 0 replies; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 23:19 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
Hi,
Feel free to switch - I think we've got enough to investigate the bug
you were seeing.
On 31 October 2017 at 23:09, David Hare <david.hare@primarydata.com> wrote:
> OK, it's easy for me to switch OSes; let me know when you have something
> ready.
> -Dave
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
> Sent: Tuesday, October 31, 2017 4:06 PM
> To: David Hare <david.hare@primarydata.com>
> Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
> Subject: Re: FIO windows
>
> OK I see the problem too when using this on Windows:
> $ ./fio --size=1g --filename
> fio1.tmp:fio2.tmp:fio3.tmp:fio4.tmp:fio5.tmp:fio6.tmp:fio7.tmp:fio8.tmp:fio9.tmp
> --name=go --create_only=1
> $ ./fio --size=250m --filename fio1.tmp:fio2.tmp:fio3.tmp --name=go
> --direct=1 --bs=64k --runtime=1m --time_based --thread
>
> Using --debug=all shows the following:
> io 2796 getevents: 1
> io 2796 io complete: io_u 0000000007BD8F00:
> off=87359488/len=65536/ddir=0io 2796 /fio3.tmpio 2796
> file 2796 put file fio3.tmp, ref=2
> file 2796 trying file fio1.tmp 11
> file 2796 goodf=1, badf=2, ff=11
> file 2796 get_next_file_rr: 00007FF5FEEA4610
> file 2796 get_next_file: 00007FF5FEEA4610 [fio1.tmp]
> file 2796 get file fio1.tmp, ref=1
> io 2796 fill_io_u: io_u 0000000007BD8F00:
> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
> io 2796 prep: io_u 0000000007BD8F00:
> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
> io 2796 queue: io_u 0000000007BD8F00:
> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio
> helperthread 2796
> 2796 since_ss: 0, next_ss: 1000, next_log: 495, msec_to_next_event: 136
> file 2796 put file fio1.tmp, ref=2
> io 2796 io_u_queued_complete: min=0
> io 2796 getevents: 0
> process 2796 pid=2936: runstate RUNNING -> FINISHING
> fio: pid=2936, err=22/file:ioengines.c:335, func=td_io_queue, error=Invalid
> argument
>
> off=21846 is not a multiple of 512.
>
> $ du -b fio*tmp
> 119304647 fio1.tmp
> 119304647 fio2.tmp
> 119304647 fio3.tmp
> 119304647 fio4.tmp
> 119304647 fio5.tmp
> 119304647 fio6.tmp
> 119304647 fio7.tmp
> 119304647 fio8.tmp
> 119304647 fio9.tmp
>
> I think this is all hinting at a bug in fio (perhaps when working with
> multiple pre-existing files that are bigger than size specified) but I think
> I'm done for the day...
>
> On 31 October 2017 at 22:51, David Hare <david.hare@primarydata.com> wrote:
>>
>> Not sure what you mean by one file? The drives are local, NTFS-formatted
>> with an allocation unit size of 64k, and they range in size from 186GB
>> to 372GB.
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 23:07 ` David Hare
2017-10-31 23:13 ` Sitsofe Wheeler
@ 2017-10-31 23:27 ` Sitsofe Wheeler
2017-10-31 23:39 ` David Hare
1 sibling, 1 reply; 38+ messages in thread
From: Sitsofe Wheeler @ 2017-10-31 23:27 UTC (permalink / raw)
To: David Hare; +Cc: Jens Axboe, fio
Hi,
You might find some useful tips in previous "go faster" fio mailing
list threads like https://www.spinics.net/lists/fio/msg05984.html .
On 31 October 2017 at 23:07, David Hare <david.hare@primarydata.com> wrote:
> Just a little back story: I am testing an 88-core Dell R930 with 30 NVMe
> drives, (6) Kingston PCIe NVMe cards each with 4 M.2 drives (i.e. 24
> drives), plus (6) 2.5" U.2 NVMe drives.
>
> I started out with my usual benchmark tools and was not able to get
> anything above 36GB/s, and I had to run multiple instances of the tools
> at the same time.
>
> I found Diskspd for Windows; it has a lot of control over threads and
> affinity, and I was able to reach 52.7GB/s, which is pretty close to the
> theoretical max of the drives of about 56GB/s.
>
> I am trying to achieve the same results (or close) with FIO so I can
> compare Windows with CentOS and ultimately our version of Red Hat. I am
> having trouble tuning FIO much above 14GB/s.
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: FIO windows
2017-10-31 23:27 ` Sitsofe Wheeler
@ 2017-10-31 23:39 ` David Hare
0 siblings, 0 replies; 38+ messages in thread
From: David Hare @ 2017-10-31 23:39 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: Jens Axboe, fio
[-- Attachment #1: Type: text/plain, Size: 1694 bytes --]
Thanks for the pointer!
-Dave
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@gmail.com]
Sent: Tuesday, October 31, 2017 4:27 PM
To: David Hare <david.hare@primarydata.com>
Cc: Jens Axboe <axboe@kernel.dk>; fio@vger.kernel.org
Subject: Re: FIO windows
Hi,
You might find some useful tips in previous "go faster" fio mailing list threads like https://www.spinics.net/lists/fio/msg05984.html .
On 31 October 2017 at 23:07, David Hare <david.hare@primarydata.com> wrote:
> Just a little back story: I am testing an 88-core Dell R930 with 30 NVMe
> drives, (6) Kingston PCIe NVMe cards each with 4 M.2 drives (i.e. 24
> drives), plus (6) 2.5" U.2 NVMe drives.
>
> I started out with my usual benchmark tools and was not able to get
> anything above 36GB/s, and I had to run multiple instances of the tools
> at the same time.
>
> I found Diskspd for Windows; it has a lot of control over threads and
> affinity, and I was able to reach 52.7GB/s, which is pretty close to the
> theoretical max of the drives of about 56GB/s.
>
> I am trying to achieve the same results (or close) with FIO so I can
> compare Windows with CentOS and ultimately our version of Red Hat. I am
> having trouble tuning FIO much above 14GB/s.
--
Sitsofe | http://sucs.org/~sits/
[-- Attachment #2: Type: text/html, Size: 2343 bytes --]
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-10-31 23:06 ` Sitsofe Wheeler
2017-10-31 23:09 ` David Hare
@ 2017-11-01 16:44 ` Jens Axboe
2017-11-01 17:05 ` Jens Axboe
1 sibling, 1 reply; 38+ messages in thread
From: Jens Axboe @ 2017-11-01 16:44 UTC (permalink / raw)
To: Sitsofe Wheeler, David Hare; +Cc: fio
I just tried to reproduce on Linux, and I can. If the files don't exist and
I run this job:
[global]
blocksize=64k
direct=1
group_reporting
rw=read
size=512m
[asdf]
filename=testfile1:testfile2:testfile3
it works fine. Then I change the size to be 250m AND add time_based
and a longer runtime to ensure we hit the roll-around, and it fails:
fio win.fio
asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=psync, iodepth=1
fio-3.1-76-g08bd-dirty
Starting 1 process
fio: io_u error on file testfile1: Invalid argument: read offset=21846, buflen=65536
fio: pid=18198, err=22/file:io_u.c:1770, func=io_u error, error=Invalid argument
So looks like a bug in the reset part. I'll take a look.
On 10/31/2017 05:06 PM, Sitsofe Wheeler wrote:
> OK I see the problem too when using this on Windows:
> $ ./fio --size=1g --filename
> fio1.tmp:fio2.tmp:fio3.tmp:fio4.tmp:fio5.tmp:fio6.tmp:fio7.tmp:fio8.tmp:fio9.tmp
> --name=go --create_only=1
> $ ./fio --size=250m --filename fio1.tmp:fio2.tmp:fio3.tmp --name=go
> --direct=1 --bs=64k --runtime=1m --time_based --thread
>
> Using --debug=all shows the following:
> io 2796 getevents: 1
> io 2796 io complete: io_u 0000000007BD8F00:
> off=87359488/len=65536/ddir=0io 2796 /fio3.tmpio 2796
> file 2796 put file fio3.tmp, ref=2
> file 2796 trying file fio1.tmp 11
> file 2796 goodf=1, badf=2, ff=11
> file 2796 get_next_file_rr: 00007FF5FEEA4610
> file 2796 get_next_file: 00007FF5FEEA4610 [fio1.tmp]
> file 2796 get file fio1.tmp, ref=1
> io 2796 fill_io_u: io_u 0000000007BD8F00:
> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
> io 2796 prep: io_u 0000000007BD8F00:
> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
> io 2796 queue: io_u 0000000007BD8F00:
> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio
> helperthread 2796
> 2796 since_ss: 0, next_ss: 1000, next_log: 495, msec_to_next_event: 136
> file 2796 put file fio1.tmp, ref=2
> io 2796 io_u_queued_complete: min=0
> io 2796 getevents: 0
> process 2796 pid=2936: runstate RUNNING -> FINISHING
> fio: pid=2936, err=22/file:ioengines.c:335, func=td_io_queue,
> error=Invalid argument
>
> off=21846 is not a multiple of 512.
>
> $ du -b fio*tmp
> 119304647 fio1.tmp
> 119304647 fio2.tmp
> 119304647 fio3.tmp
> 119304647 fio4.tmp
> 119304647 fio5.tmp
> 119304647 fio6.tmp
> 119304647 fio7.tmp
> 119304647 fio8.tmp
> 119304647 fio9.tmp
>
> I think this is all hinting at a bug in fio (perhaps when working with
> multiple pre-existing files that are bigger than size specified) but I
> think I'm done for the day...
>
> On 31 October 2017 at 22:51, David Hare <david.hare@primarydata.com> wrote:
>>
>> Not sure what you mean by one file? The drives are local, NTFS-formatted with
>> an allocation unit size of 64k, and they range in size from 186GB to 372GB.
>
--
Jens Axboe
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: FIO windows
2017-11-01 16:44 ` Jens Axboe
@ 2017-11-01 17:05 ` Jens Axboe
2017-11-01 17:39 ` David Hare
0 siblings, 1 reply; 38+ messages in thread
From: Jens Axboe @ 2017-11-01 17:05 UTC (permalink / raw)
To: Sitsofe Wheeler, David Hare; +Cc: fio
Seems to me that we should just reset back to zero, and not worry
about the io_size at all. This is what is screwing things up, and
I can't think of why we would attempt to be more clever in
wrapping around. The below should do the trick.
David, once appveyor finishes, you should be able to download the build
here and re-test:
https://ci.appveyor.com/project/axboe/fio/build/job/p0misp4ehe96ti7d/artifacts
diff --git a/io_u.c b/io_u.c
index 4246edff0b2f..aac74bf6f7ad 100644
--- a/io_u.c
+++ b/io_u.c
@@ -363,14 +363,7 @@ static int get_next_seq_offset(struct thread_data *td, struct fio_file *f,
if (f->last_pos[ddir] >= f->io_size + get_start_offset(td, f) &&
o->time_based) {
- struct thread_options *o = &td->o;
- uint64_t io_size = f->io_size + (f->io_size % o->min_bs[ddir]);
-
- if (io_size > f->last_pos[ddir])
- f->last_pos[ddir] = 0;
- else
- f->last_pos[ddir] = f->last_pos[ddir] - io_size;
-
+ f->last_pos[ddir] = 0;
loop_cache_invalidate(td, f);
}
On 11/01/2017 10:44 AM, Jens Axboe wrote:
> I just tried to reproduce on Linux, and I can. If the files don't exist and
> I run this job:
>
> [global]
> blocksize=64k
> direct=1
> group_reporting
> rw=read
> size=512m
>
> [asdf]
> filename=testfile1:testfile2:testfile3
>
> it works fine. Then I change the size to be 250m AND add time_based
> and a longer runtime to ensure we hit the roll-around, and it fails:
>
> fio win.fio
> asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=psync, iodepth=1
> fio-3.1-76-g08bd-dirty
> Starting 1 process
> fio: io_u error on file testfile1: Invalid argument: read offset=21846, buflen=65536
> fio: pid=18198, err=22/file:io_u.c:1770, func=io_u error, error=Invalid argument
>
> So looks like a bug in the reset part. I'll take a look.
>
>
> On 10/31/2017 05:06 PM, Sitsofe Wheeler wrote:
>> OK I see the problem too when using this on Windows:
>> $ ./fio --size=1g --filename
>> fio1.tmp:fio2.tmp:fio3.tmp:fio4.tmp:fio5.tmp:fio6.tmp:fio7.tmp:fio8.tmp:fio9.tmp
>> --name=go --create_only=1
>> $ ./fio --size=250m --filename fio1.tmp:fio2.tmp:fio3.tmp --name=go
>> --direct=1 --bs=64k --runtime=1m --time_based --thread
>>
>> Using --debug=all shows the following:
>> io 2796 getevents: 1
>> io 2796 io complete: io_u 0000000007BD8F00:
>> off=87359488/len=65536/ddir=0io 2796 /fio3.tmpio 2796
>> file 2796 put file fio3.tmp, ref=2
>> file 2796 trying file fio1.tmp 11
>> file 2796 goodf=1, badf=2, ff=11
>> file 2796 get_next_file_rr: 00007FF5FEEA4610
>> file 2796 get_next_file: 00007FF5FEEA4610 [fio1.tmp]
>> file 2796 get file fio1.tmp, ref=1
>> io 2796 fill_io_u: io_u 0000000007BD8F00:
>> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
>> io 2796 prep: io_u 0000000007BD8F00:
>> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
>> io 2796 queue: io_u 0000000007BD8F00:
>> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio
>> helperthread 2796
>> 2796 since_ss: 0, next_ss: 1000, next_log: 495, msec_to_next_event: 136
>> file 2796 put file fio1.tmp, ref=2
>> io 2796 io_u_queued_complete: min=0
>> io 2796 getevents: 0
>> process 2796 pid=2936: runstate RUNNING -> FINISHING
>> fio: pid=2936, err=22/file:ioengines.c:335, func=td_io_queue,
>> error=Invalid argument
>>
>> off=21846 is not a multiple of 512.
>>
>> $ du -b fio*tmp
>> 119304647 fio1.tmp
>> 119304647 fio2.tmp
>> 119304647 fio3.tmp
>> 119304647 fio4.tmp
>> 119304647 fio5.tmp
>> 119304647 fio6.tmp
>> 119304647 fio7.tmp
>> 119304647 fio8.tmp
>> 119304647 fio9.tmp
>>
>> I think this is all hinting at a bug in fio (perhaps when working with
>> multiple pre-existing files that are bigger than size specified) but I
>> think I'm done for the day...
>>
>> On 31 October 2017 at 22:51, David Hare <david.hare@primarydata.com> wrote:
>>>
>>> Not sure what you mean by one file? The drives are local NTFS formatted with
>>> Allocation unit size of 64k; they range in size from 186 GB to 372 GB.
>>
>
>
--
Jens Axboe
^ permalink raw reply related [flat|nested] 38+ messages in thread
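Putting together the changes Jens describes above (size dropped to 250m, time_based added, longer runtime), the failing Linux repro job file would look roughly like this; the exact runtime value is an assumption, since the message only says "a longer runtime":

```
[global]
blocksize=64k
direct=1
group_reporting
rw=read
size=250m
time_based
runtime=60

[asdf]
filename=testfile1:testfile2:testfile3
```

With direct=1, any wrap-around to a non-sector-aligned offset makes the next read fail with EINVAL, which is the error both repros hit.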
* RE: FIO windows
2017-11-01 17:05 ` Jens Axboe
@ 2017-11-01 17:39 ` David Hare
0 siblings, 0 replies; 38+ messages in thread
From: David Hare @ 2017-11-01 17:39 UTC (permalink / raw)
To: Jens Axboe, Sitsofe Wheeler; +Cc: fio
Ok, thanks.
-----Original Message-----
From: Jens Axboe [mailto:axboe@kernel.dk]
Sent: Wednesday, November 1, 2017 10:05 AM
To: Sitsofe Wheeler <sitsofe@gmail.com>; David Hare <david.hare@primarydata.com>
Cc: fio@vger.kernel.org
Subject: Re: FIO windows
Seems to me that we should just reset back to zero, and not worry about the io_size at all. This is what is screwing things up, and I can't think of why we would attempt to be more clever in wrapping around. The below should do the trick.
David, once appveyor finishes, you should be able to download the build here and re-test:
https://ci.appveyor.com/project/axboe/fio/build/job/p0misp4ehe96ti7d/artifacts
diff --git a/io_u.c b/io_u.c
index 4246edff0b2f..aac74bf6f7ad 100644
--- a/io_u.c
+++ b/io_u.c
@@ -363,14 +363,7 @@ static int get_next_seq_offset(struct thread_data *td, struct fio_file *f,
if (f->last_pos[ddir] >= f->io_size + get_start_offset(td, f) &&
o->time_based) {
- struct thread_options *o = &td->o;
- uint64_t io_size = f->io_size + (f->io_size % o->min_bs[ddir]);
-
- if (io_size > f->last_pos[ddir])
- f->last_pos[ddir] = 0;
- else
- f->last_pos[ddir] = f->last_pos[ddir] - io_size;
-
+ f->last_pos[ddir] = 0;
loop_cache_invalidate(td, f);
}
On 11/01/2017 10:44 AM, Jens Axboe wrote:
> I just tried to reproduce on Linux, and I can. If the files don't
> exist and I run this job:
>
> [global]
> blocksize=64k
> direct=1
> group_reporting
> rw=read
> size=512m
>
> [asdf]
> filename=testfile1:testfile2:testfile3
>
> it works fine. Then I change the size to be 250m AND add time_based
> and a longer runtime to ensure we hit the roll-around, and it fails:
>
> fio win.fio
> asdf: (g=0): rw=read, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T)
> 64.0KiB-64.0KiB, ioengine=psync, iodepth=1 fio-3.1-76-g08bd-dirty
> Starting 1 process
> fio: io_u error on file testfile1: Invalid argument: read
> offset=21846, buflen=65536
> fio: pid=18198, err=22/file:io_u.c:1770, func=io_u error,
> error=Invalid argument
>
> So looks like a bug in the reset part. I'll take a look.
>
>
> On 10/31/2017 05:06 PM, Sitsofe Wheeler wrote:
>> OK I see the problem too when using this on Windows:
>> $ ./fio --size=1g --filename
>> fio1.tmp:fio2.tmp:fio3.tmp:fio4.tmp:fio5.tmp:fio6.tmp:fio7.tmp:fio8.t
>> mp:fio9.tmp
>> --name=go --create_only=1
>> $ ./fio --size=250m --filename fio1.tmp:fio2.tmp:fio3.tmp --name=go
>> --direct=1 --bs=64k --runtime=1m --time_based --thread
>>
>> Using --debug=all shows the following:
>> io 2796 getevents: 1
>> io 2796 io complete: io_u 0000000007BD8F00:
>> off=87359488/len=65536/ddir=0io 2796 /fio3.tmpio 2796
>> file 2796 put file fio3.tmp, ref=2
>> file 2796 trying file fio1.tmp 11
>> file 2796 goodf=1, badf=2, ff=11
>> file 2796 get_next_file_rr: 00007FF5FEEA4610
>> file 2796 get_next_file: 00007FF5FEEA4610 [fio1.tmp]
>> file 2796 get file fio1.tmp, ref=1
>> io 2796 fill_io_u: io_u 0000000007BD8F00:
>> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
>> io 2796 prep: io_u 0000000007BD8F00:
>> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio 2796
>> io 2796 queue: io_u 0000000007BD8F00:
>> off=21846/len=65536/ddir=0io 2796 /fio1.tmpio
>> helperthread 2796
>> 2796 since_ss: 0, next_ss: 1000, next_log: 495, msec_to_next_event: 136
>> file 2796 put file fio1.tmp, ref=2
>> io 2796 io_u_queued_complete: min=0
>> io 2796 getevents: 0
>> process 2796 pid=2936: runstate RUNNING -> FINISHING
>> fio: pid=2936, err=22/file:ioengines.c:335, func=td_io_queue,
>> error=Invalid argument
>>
>> off=21846 is not a multiple of 512.
>>
>> $ du -b fio*tmp
>> 119304647 fio1.tmp
>> 119304647 fio2.tmp
>> 119304647 fio3.tmp
>> 119304647 fio4.tmp
>> 119304647 fio5.tmp
>> 119304647 fio6.tmp
>> 119304647 fio7.tmp
>> 119304647 fio8.tmp
>> 119304647 fio9.tmp
>>
>> I think this is all hinting at a bug in fio (perhaps when working
>> with multiple pre-existing files that are bigger than size specified)
>> but I think I'm done for the day...
>>
>> On 31 October 2017 at 22:51, David Hare <david.hare@primarydata.com> wrote:
>>>
>>> Not sure what you mean by one file? The drives are local NTFS
>>> formatted with Allocation unit size of 64k; they range in size from 186 GB to 372 GB.
>>
>
>
--
Jens Axboe
^ permalink raw reply related [flat|nested] 38+ messages in thread
end of thread, other threads:[~2017-11-01 17:39 UTC | newest]
Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <MWHPR11MB2045B4FFEB905261E1FAFD1E875E0@MWHPR11MB2045.namprd11.prod.outlook.com>
[not found] ` <863271c2-eeb4-b770-bdfa-89746d4d9c70@kernel.dk>
[not found] ` <MWHPR11MB2045DFA579DB2BDC5B1EA466875E0@MWHPR11MB2045.namprd11.prod.outlook.com>
2017-10-31 18:51 ` FIO windows Jens Axboe
2017-10-31 19:00 ` Rebecca Cran
2017-10-31 19:07 ` David Hare
2017-10-31 19:43 ` Rebecca Cran
2017-10-31 19:44 ` David Hare
2017-10-31 19:46 ` Sitsofe Wheeler
2017-10-31 19:49 ` David Hare
2017-10-31 19:50 ` Jens Axboe
2017-10-31 19:55 ` David Hare
2017-10-31 19:59 ` Sitsofe Wheeler
2017-10-31 20:21 ` David Hare
2017-10-31 20:29 ` Sitsofe Wheeler
2017-10-31 20:46 ` David Hare
2017-10-31 21:27 ` Sitsofe Wheeler
2017-10-31 21:46 ` David Hare
2017-10-31 21:54 ` Sitsofe Wheeler
2017-10-31 22:03 ` David Hare
2017-10-31 22:05 ` Sitsofe Wheeler
2017-10-31 22:06 ` David Hare
2017-10-31 22:14 ` Sitsofe Wheeler
2017-10-31 22:33 ` David Hare
2017-10-31 22:45 ` Sitsofe Wheeler
2017-10-31 22:51 ` David Hare
2017-10-31 23:06 ` Sitsofe Wheeler
2017-10-31 23:09 ` David Hare
2017-10-31 23:19 ` Sitsofe Wheeler
2017-11-01 16:44 ` Jens Axboe
2017-11-01 17:05 ` Jens Axboe
2017-11-01 17:39 ` David Hare
2017-10-31 22:56 ` David Hare
2017-10-31 23:07 ` David Hare
2017-10-31 23:13 ` Sitsofe Wheeler
2017-10-31 23:27 ` Sitsofe Wheeler
2017-10-31 23:39 ` David Hare
2017-10-31 22:45 ` David Hare
2017-10-31 22:28 ` David Hare
2017-10-31 20:47 ` David Hare
2017-10-31 20:23 ` Sitsofe Wheeler