* fio serialize across jobs
From: Jeff Furlong @ 2018-07-06 22:14 UTC (permalink / raw)
  To: fio

Hi All,
Back in commit 997b5680d139ce82c2034ba3a0d602cfd778b89b ("fio: add serialize_overlap option"), a feature was added to prevent write/trim race conditions within a single job's I/O queue.  Can this feature be applied across multiple jobs?  Consider:

fio --ioengine=libaio --direct=1 --filename=/dev/nvme0n1 --time_based --name=test1 --rw=randwrite --runtime=5s --name=test2 --rw=randwrite --runtime=5s --serialize_overlap=1

Would we have to call in_flight_overlap() in a loop over all of the thread data pointers?
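
Roughly, something like this sketch (illustrative only; I'm assuming
fio's for_each_td() iterator and that in_flight_overlap() can be
pointed at another job's io_u_all list, and ignoring locking for now):

static bool any_job_overlaps(struct io_u *io_u)
{
	struct thread_data *td2;
	int i;

	for_each_td(td2, i) {
		/* check this io_u against td2's in-flight I/Os */
		if (in_flight_overlap(&td2->io_u_all, io_u))
			return true;
	}

	return false;
}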

Thanks.

Regards,
Jeff



* Re: fio serialize across jobs
From: Sitsofe Wheeler @ 2018-07-07  4:23 UTC (permalink / raw)
  To: Jeff Furlong; +Cc: fio

Hi,

On 6 July 2018 at 23:14, Jeff Furlong <jeff.furlong@wdc.com> wrote:
> Hi All,
> Back in commit 997b5680d139ce82c2034ba3a0d602cfd778b89b ("fio: add serialize_overlap option"), a feature was added to prevent write/trim race conditions within a single job's I/O queue.  Can this feature be applied across multiple jobs?  Consider:
>
> fio --ioengine=libaio --direct=1 --filename=/dev/nvme0n1 --time_based --name=test1 --rw=randwrite --runtime=5s --name=test2 --rw=randwrite --runtime=5s --serialize_overlap=1
>
> Would we have to call in_flight_overlap() in a loop over all of the thread data pointers?

It would be a bit more work (the original serialize_overlap was
fiddly to make correct but was easy to get going), but I can see why
you might want what you describe. The big problem is that I can't
see how you can do it without introducing a large amount of locking
into fio's I/O path. I think this would also border on a feature
that diskspd has where multiple threads can do I/O to a single file
but they all share a randommap...
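
To make the locking concern concrete, a hypothetical sketch (not
fio's actual code): every job's submit path would have to take a
lock shared with every other job before it could even look at their
in-flight lists:

#include <pthread.h>

/* hypothetical: one lock serializing every job's submit path */
static pthread_mutex_t overlap_lock = PTHREAD_MUTEX_INITIALIZER;

static bool submit_would_overlap(struct io_u *io_u)
{
	struct thread_data *td2;
	bool overlap = false;
	int i;

	pthread_mutex_lock(&overlap_lock);
	for_each_td(td2, i) {
		if (in_flight_overlap(&td2->io_u_all, io_u)) {
			overlap = true;
			break;
		}
	}
	pthread_mutex_unlock(&overlap_lock);

	/* the lock is hit on every submission, overlapping or not */
	return overlap;
}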

-- 
Sitsofe | http://sucs.org/~sits/


* RE: fio serialize across jobs
From: Jeff Furlong @ 2018-07-09 17:58 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: fio

What if we restricted this to 2 jobs and capped the second job's QD at 1?  Might that keep the locking overhead down?
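
Concretely, something like this job file (illustrative):

[global]
ioengine=libaio
direct=1
filename=/dev/nvme0n1
serialize_overlap=1
time_based
runtime=5s

[writer]
rw=randwrite
iodepth=32

[trimmer]
rw=randtrim
iodepth=1

With the second job pinned at QD=1, there is at most one of its I/Os
in flight to check against at any given time.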

Regards,
Jeff


* Re: fio serialize across jobs
From: Jens Axboe @ 2018-07-12 17:13 UTC (permalink / raw)
  To: Jeff Furlong, Sitsofe Wheeler; +Cc: fio

General comments on this... Fio does have a notion of multiple
workers per job state, most notably for the offload IO model
(io_submit_mode=offload). That would make the most sense to utilize
for something like this, rather than having independent jobs that
need to share basically everything. You end up with a rather large
rework, only to arrive at what the offload IO model can already do.
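
Something along these lines (untested):

fio --ioengine=libaio --direct=1 --filename=/dev/nvme0n1 \
    --name=test --rw=randwrite --iodepth=32 --time_based --runtime=5s \
    --io_submit_mode=offload --serialize_overlap=1

One job, with the actual submissions handled by offload workers that
already share that job's state.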


On 7/9/18 11:58 AM, Jeff Furlong wrote:
> What if we limited the jobs to 2 and limited the second job's QD to 1?  Might that limit the locking overhead?


-- 
Jens Axboe




* RE: fio serialize across jobs
From: Jeff Furlong @ 2018-08-15  0:04 UTC (permalink / raw)
  To: Jens Axboe, Sitsofe Wheeler; +Cc: fio

Going back to this topic.  

Suppose --serialize_overlap=1 and --io_submit_mode=offload.  Would the suggestion be to apply in_flight_overlap() within workqueue_enqueue()?
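
i.e., roughly this (hypothetical sketch; it assumes the io_u can be
recovered from the workqueue_work via container_of() and glosses over
locking against the completion path):

#include <unistd.h>

static void enqueue_serialized(struct workqueue *wq,
			       struct workqueue_work *work)
{
	struct io_u *io_u = container_of(work, struct io_u, work);

	/* hold the enqueue until no in-flight io_u overlaps this range */
	while (in_flight_overlap(&wq->td->io_u_all, io_u))
		usleep(100);	/* placeholder back-off, not real fio code */

	workqueue_enqueue(wq, work);
}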

Regards,
Jeff


