* Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance
@ 2017-09-06 21:09 Oleksandr Natalenko
  2017-09-06 21:22 ` Tom Nguyen
  0 siblings, 1 reply; 8+ messages in thread
From: Oleksandr Natalenko @ 2017-09-06 21:09 UTC (permalink / raw)
  To: ming.lei; +Cc: linux-block, Jens Axboe

Feel free to add:

Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>

since I'm running this on 4 machines without issues.

> Hi Jens,
>
> Ping...


* Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance
  2017-09-06 21:09 [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance Oleksandr Natalenko
@ 2017-09-06 21:22 ` Tom Nguyen
  0 siblings, 0 replies; 8+ messages in thread
From: Tom Nguyen @ 2017-09-06 21:22 UTC (permalink / raw)
  To: Oleksandr Natalenko, ming.lei; +Cc: linux-block, Jens Axboe

Likewise, no problems on my work laptop with 4 days of uptime.

Tested-by: Tom Nguyen <tom81094@gmail.com>


On 09/07/2017 04:09 AM, Oleksandr Natalenko wrote:
> Feel free to add:
>
> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
>
> since I'm running this on 4 machines without issues.
>
>> Hi Jens,
>>
>> Ping...


* Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance
  2017-09-19 19:25 ` Omar Sandoval
@ 2017-09-20  3:18   ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2017-09-20  3:18 UTC (permalink / raw)
  To: Omar Sandoval
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Bart Van Assche,
	Laurence Oberman, Paolo Valente, Mel Gorman

On Tue, Sep 19, 2017 at 12:25:15PM -0700, Omar Sandoval wrote:
> On Sat, Sep 02, 2017 at 11:17:15PM +0800, Ming Lei wrote:
> > Hi,
> > 
> > In Red Hat's internal storage tests of the blk-mq scheduler, we
> > found that I/O performance is quite bad with mq-deadline, especially
> > for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
> > SRP...)
> > 
> > It turns out that one big issue causes the performance regression:
> > requests are still dequeued from the sw queue/scheduler queue even
> > when the LLD's queue is busy, so I/O merging becomes quite difficult,
> > and sequential I/O degrades a lot.
> > 
> > The first five patches improve this situation and recover some of
> > the lost performance.
> 
Sorry it took so long; I've reviewed or commented on patches 1-6. When
you send v5, could you send just patches 1-6, and split the rest into
their own series?

Sure, no problem.

Thanks for your review!

-- 
Ming


* Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance
  2017-09-02 15:17 Ming Lei
  2017-09-04  9:12 ` Paolo Valente
@ 2017-09-19 19:25 ` Omar Sandoval
  2017-09-20  3:18   ` Ming Lei
  1 sibling, 1 reply; 8+ messages in thread
From: Omar Sandoval @ 2017-09-19 19:25 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Bart Van Assche,
	Laurence Oberman, Paolo Valente, Mel Gorman

On Sat, Sep 02, 2017 at 11:17:15PM +0800, Ming Lei wrote:
> Hi,
> 
> In Red Hat's internal storage tests of the blk-mq scheduler, we
> found that I/O performance is quite bad with mq-deadline, especially
> for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
> SRP...)
> 
> It turns out that one big issue causes the performance regression:
> requests are still dequeued from the sw queue/scheduler queue even
> when the LLD's queue is busy, so I/O merging becomes quite difficult,
> and sequential I/O degrades a lot.
> 
> The first five patches improve this situation and recover some of
> the lost performance.

Sorry it took so long; I've reviewed or commented on patches 1-6. When
you send v5, could you send just patches 1-6, and split the rest into
their own series?


* Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance
  2017-09-05  1:39   ` Ming Lei
@ 2017-09-06 15:27     ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2017-09-06 15:27 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Paolo Valente, linux-block, Christoph Hellwig, Bart Van Assche,
	Laurence Oberman, Mel Gorman

On Tue, Sep 05, 2017 at 09:39:51AM +0800, Ming Lei wrote:
> On Mon, Sep 04, 2017 at 11:12:49AM +0200, Paolo Valente wrote:
> > 
> > > Il giorno 02 set 2017, alle ore 17:17, Ming Lei <ming.lei@redhat.com> ha scritto:
> > > 
> > > Hi,
> > > 
> > > In Red Hat's internal storage tests of the blk-mq scheduler, we
> > > found that I/O performance is quite bad with mq-deadline, especially
> > > for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
> > > SRP...)
> > > 
> > > It turns out that one big issue causes the performance regression:
> > > requests are still dequeued from the sw queue/scheduler queue even
> > > when the LLD's queue is busy, so I/O merging becomes quite difficult,
> > > and sequential I/O degrades a lot.
> > > 
> > > The first five patches improve this situation and recover some of
> > > the lost performance.
> > > 
> > > Patches 6 and 7 use q->queue_depth as a hint for setting up the
> > > scheduler queue depth.
> > > 
> > > Patches 8 ~ 14 improve bio merging via a hash table in the sw queue,
> > > which makes bio merging more efficient than the current approach,
> > > in which only the last 8 requests are checked. Since patches 6 ~ 14
> > > convert SCSI devices to the scheduler way of dequeuing one request
> > > from the sw queue at a time, ctx->lock is acquired more often;
> > > merging bios via the hash table decreases the ctx->lock hold time
> > > and should eliminate that effect.
> > > 
> > > With these changes, SCSI-MQ sequential I/O performance improves
> > > significantly: Paolo reported that mq-deadline performance improved
> > > a lot[2] in his dbench test with V2, and performance improvement on
> > > lpfc/qla2xxx was also observed with V1.[1]
> > > 
> > > Also, Bart worried that this patchset may affect SRP, so here is
> > > test data on SCSI SRP this time:
> > > 
> > > - fio(libaio, bs:4k, dio, queue_depth:64, 64 jobs)
> > > - system(16 cores, dual sockets, mem: 96G)
> > > 
> > >          |v4.13-rc6+*  |v4.13-rc6+   | patched v4.13-rc6+ 
> > > -----------------------------------------------------
> > > IOPS(K)  |  DEADLINE   |    NONE     |    NONE     
> > > -----------------------------------------------------
> > > read      |      587.81 |      511.96 |      518.51 
> > > -----------------------------------------------------
> > > randread  |      116.44 |      142.99 |      142.46 
> > > -----------------------------------------------------
> > > write     |      580.87 |       536.4 |      582.15 
> > > -----------------------------------------------------
> > > randwrite |      104.95 |      124.89 |      123.99 
> > > -----------------------------------------------------
> > > 
> > > 
> > >          |v4.13-rc6+   |v4.13-rc6+   | patched v4.13-rc6+ 
> > > -----------------------------------------------------
> > > IOPS(K)  |  DEADLINE   |MQ-DEADLINE  |MQ-DEADLINE  
> > > -----------------------------------------------------
> > > read      |      587.81 |       158.7 |      450.41 
> > > -----------------------------------------------------
> > > randread  |      116.44 |      142.04 |      142.72 
> > > -----------------------------------------------------
> > > write     |      580.87 |      136.61 |      569.37 
> > > -----------------------------------------------------
> > > randwrite |      104.95 |      123.14 |      124.36 
> > > -----------------------------------------------------
> > > 
> > > *: v4.13-rc6+ means v4.13-rc6 with block for-next
> > > 
> > > 
> > > Please consider merging this for V4.4.
> > > 
> > > [1] http://marc.info/?l=linux-block&m=150151989915776&w=2
> > > [2] https://marc.info/?l=linux-block&m=150217980602843&w=2
> > > 
> > > V4:
> > > 	- add Reviewed-by tags
> > > 	- some trivial changes: typo fixes in commit logs or comments,
> > > 	variable names; no actual functional change
> > > 
> > > V3:
> > > 	- fully round-robin picking of requests from ctx, as suggested
> > > 	by Bart
> > > 	- remove one local variable in __sbitmap_for_each_set()
> > > 	- drop the single-dispatch-list patches, which can improve
> > > 	performance on mq-deadline but cause a slight degradation on
> > > 	none because all hctxs need to be checked after ->dispatch
> > > 	is flushed; will post them again once they are mature
> > > 	- rebase on v4.13-rc6 with block for-next
> > > 
> > > V2:
> > > 	- dequeue requests from sw queues in round-robin style, as
> > > 	suggested by Bart, and introduce one helper in sbitmap for
> > > 	this purpose
> > > 	- improve bio merging via a hash table in the sw queue
> > > 	- add comments about using the DISPATCH_BUSY state in a lockless
> > > 	way, simplifying handling of the busy state
> > > 	- hold ctx->lock when clearing the ctx busy bit, as suggested
> > > 	by Bart
> > > 
> > > 
> > 
> > Tested-by: Paolo Valente <paolo.valente@linaro.org>
> 
> Hi Jens,
> 
> Is there any chance of getting this patchset merged for V4.4?

Hi Jens,

Ping...

Thanks,
Ming


* Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance
  2017-09-04  9:12 ` Paolo Valente
@ 2017-09-05  1:39   ` Ming Lei
  2017-09-06 15:27     ` Ming Lei
  0 siblings, 1 reply; 8+ messages in thread
From: Ming Lei @ 2017-09-05  1:39 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Paolo Valente, linux-block, Christoph Hellwig, Bart Van Assche,
	Laurence Oberman, Mel Gorman

On Mon, Sep 04, 2017 at 11:12:49AM +0200, Paolo Valente wrote:
> 
> > Il giorno 02 set 2017, alle ore 17:17, Ming Lei <ming.lei@redhat.com> ha scritto:
> > 
> > Hi,
> > 
> > In Red Hat's internal storage tests of the blk-mq scheduler, we
> > found that I/O performance is quite bad with mq-deadline, especially
> > for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
> > SRP...)
> > 
> > It turns out that one big issue causes the performance regression:
> > requests are still dequeued from the sw queue/scheduler queue even
> > when the LLD's queue is busy, so I/O merging becomes quite difficult,
> > and sequential I/O degrades a lot.
> > 
> > The first five patches improve this situation and recover some of
> > the lost performance.
> > 
> > Patches 6 and 7 use q->queue_depth as a hint for setting up the
> > scheduler queue depth.
> > 
> > Patches 8 ~ 14 improve bio merging via a hash table in the sw queue,
> > which makes bio merging more efficient than the current approach,
> > in which only the last 8 requests are checked. Since patches 6 ~ 14
> > convert SCSI devices to the scheduler way of dequeuing one request
> > from the sw queue at a time, ctx->lock is acquired more often;
> > merging bios via the hash table decreases the ctx->lock hold time
> > and should eliminate that effect.
> > 
> > With these changes, SCSI-MQ sequential I/O performance improves
> > significantly: Paolo reported that mq-deadline performance improved
> > a lot[2] in his dbench test with V2, and performance improvement on
> > lpfc/qla2xxx was also observed with V1.[1]
> > 
> > Also, Bart worried that this patchset may affect SRP, so here is
> > test data on SCSI SRP this time:
> > 
> > - fio(libaio, bs:4k, dio, queue_depth:64, 64 jobs)
> > - system(16 cores, dual sockets, mem: 96G)
> > 
> >          |v4.13-rc6+*  |v4.13-rc6+   | patched v4.13-rc6+ 
> > -----------------------------------------------------
> > IOPS(K)  |  DEADLINE   |    NONE     |    NONE     
> > -----------------------------------------------------
> > read      |      587.81 |      511.96 |      518.51 
> > -----------------------------------------------------
> > randread  |      116.44 |      142.99 |      142.46 
> > -----------------------------------------------------
> > write     |      580.87 |       536.4 |      582.15 
> > -----------------------------------------------------
> > randwrite |      104.95 |      124.89 |      123.99 
> > -----------------------------------------------------
> > 
> > 
> >          |v4.13-rc6+   |v4.13-rc6+   | patched v4.13-rc6+ 
> > -----------------------------------------------------
> > IOPS(K)  |  DEADLINE   |MQ-DEADLINE  |MQ-DEADLINE  
> > -----------------------------------------------------
> > read      |      587.81 |       158.7 |      450.41 
> > -----------------------------------------------------
> > randread  |      116.44 |      142.04 |      142.72 
> > -----------------------------------------------------
> > write     |      580.87 |      136.61 |      569.37 
> > -----------------------------------------------------
> > randwrite |      104.95 |      123.14 |      124.36 
> > -----------------------------------------------------
> > 
> > *: v4.13-rc6+ means v4.13-rc6 with block for-next
> > 
> > 
> > Please consider merging this for V4.4.
> > 
> > [1] http://marc.info/?l=linux-block&m=150151989915776&w=2
> > [2] https://marc.info/?l=linux-block&m=150217980602843&w=2
> > 
> > V4:
> > 	- add Reviewed-by tags
> > 	- some trivial changes: typo fixes in commit logs or comments,
> > 	variable names; no actual functional change
> > 
> > V3:
> > 	- fully round-robin picking of requests from ctx, as suggested
> > 	by Bart
> > 	- remove one local variable in __sbitmap_for_each_set()
> > 	- drop the single-dispatch-list patches, which can improve
> > 	performance on mq-deadline but cause a slight degradation on
> > 	none because all hctxs need to be checked after ->dispatch
> > 	is flushed; will post them again once they are mature
> > 	- rebase on v4.13-rc6 with block for-next
> > 
> > V2:
> > 	- dequeue requests from sw queues in round-robin style, as
> > 	suggested by Bart, and introduce one helper in sbitmap for
> > 	this purpose
> > 	- improve bio merging via a hash table in the sw queue
> > 	- add comments about using the DISPATCH_BUSY state in a lockless
> > 	way, simplifying handling of the busy state
> > 	- hold ctx->lock when clearing the ctx busy bit, as suggested
> > 	by Bart
> > 
> > 
> 
> Tested-by: Paolo Valente <paolo.valente@linaro.org>

Hi Jens,

Is there any chance of getting this patchset merged for V4.4?


Thanks,
Ming


* Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance
  2017-09-02 15:17 Ming Lei
@ 2017-09-04  9:12 ` Paolo Valente
  2017-09-05  1:39   ` Ming Lei
  2017-09-19 19:25 ` Omar Sandoval
  1 sibling, 1 reply; 8+ messages in thread
From: Paolo Valente @ 2017-09-04  9:12 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Bart Van Assche,
	Laurence Oberman, Mel Gorman


> Il giorno 02 set 2017, alle ore 17:17, Ming Lei <ming.lei@redhat.com> ha scritto:
> 
> Hi,
> 
> In Red Hat's internal storage tests of the blk-mq scheduler, we
> found that I/O performance is quite bad with mq-deadline, especially
> for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
> SRP...)
> 
> It turns out that one big issue causes the performance regression:
> requests are still dequeued from the sw queue/scheduler queue even
> when the LLD's queue is busy, so I/O merging becomes quite difficult,
> and sequential I/O degrades a lot.
> 
> The first five patches improve this situation and recover some of
> the lost performance.
> 
> Patches 6 and 7 use q->queue_depth as a hint for setting up the
> scheduler queue depth.
> 
> Patches 8 ~ 14 improve bio merging via a hash table in the sw queue,
> which makes bio merging more efficient than the current approach,
> in which only the last 8 requests are checked. Since patches 6 ~ 14
> convert SCSI devices to the scheduler way of dequeuing one request
> from the sw queue at a time, ctx->lock is acquired more often;
> merging bios via the hash table decreases the ctx->lock hold time
> and should eliminate that effect.
> 
> With these changes, SCSI-MQ sequential I/O performance improves
> significantly: Paolo reported that mq-deadline performance improved
> a lot[2] in his dbench test with V2, and performance improvement on
> lpfc/qla2xxx was also observed with V1.[1]
> 
> Also, Bart worried that this patchset may affect SRP, so here is
> test data on SCSI SRP this time:
> 
> - fio(libaio, bs:4k, dio, queue_depth:64, 64 jobs)
> - system(16 cores, dual sockets, mem: 96G)
> 
>          |v4.13-rc6+*  |v4.13-rc6+   | patched v4.13-rc6+
> -----------------------------------------------------
> IOPS(K)  |  DEADLINE   |    NONE     |    NONE
> -----------------------------------------------------
> read      |      587.81 |      511.96 |      518.51
> -----------------------------------------------------
> randread  |      116.44 |      142.99 |      142.46
> -----------------------------------------------------
> write     |      580.87 |       536.4 |      582.15
> -----------------------------------------------------
> randwrite |      104.95 |      124.89 |      123.99
> -----------------------------------------------------
> 
> 
>          |v4.13-rc6+   |v4.13-rc6+   | patched v4.13-rc6+
> -----------------------------------------------------
> IOPS(K)  |  DEADLINE   |MQ-DEADLINE  |MQ-DEADLINE
> -----------------------------------------------------
> read      |      587.81 |       158.7 |      450.41
> -----------------------------------------------------
> randread  |      116.44 |      142.04 |      142.72
> -----------------------------------------------------
> write     |      580.87 |      136.61 |      569.37
> -----------------------------------------------------
> randwrite |      104.95 |      123.14 |      124.36
> -----------------------------------------------------
> 
> *: v4.13-rc6+ means v4.13-rc6 with block for-next
> 
> 
> Please consider merging this for V4.4.
> 
> [1] http://marc.info/?l=linux-block&m=150151989915776&w=2
> [2] https://marc.info/?l=linux-block&m=150217980602843&w=2
> 
> V4:
> 	- add Reviewed-by tags
> 	- some trivial changes: typo fixes in commit logs or comments,
> 	variable names; no actual functional change
> 
> V3:
> 	- fully round-robin picking of requests from ctx, as suggested
> 	by Bart
> 	- remove one local variable in __sbitmap_for_each_set()
> 	- drop the single-dispatch-list patches, which can improve
> 	performance on mq-deadline but cause a slight degradation on
> 	none because all hctxs need to be checked after ->dispatch
> 	is flushed; will post them again once they are mature
> 	- rebase on v4.13-rc6 with block for-next
> 
> V2:
> 	- dequeue requests from sw queues in round-robin style, as
> 	suggested by Bart, and introduce one helper in sbitmap for
> 	this purpose
> 	- improve bio merging via a hash table in the sw queue
> 	- add comments about using the DISPATCH_BUSY state in a lockless
> 	way, simplifying handling of the busy state
> 	- hold ctx->lock when clearing the ctx busy bit, as suggested
> 	by Bart
> 
> 

Tested-by: Paolo Valente <paolo.valente@linaro.org>

> Ming Lei (14):
>  blk-mq-sched: fix scheduler bad performance
>  sbitmap: introduce __sbitmap_for_each_set()
>  blk-mq: introduce blk_mq_dispatch_rq_from_ctx()
>  blk-mq-sched: move actual dispatching into one helper
>  blk-mq-sched: improve dispatching from sw queue
>  blk-mq-sched: don't dequeue request until all in ->dispatch are
>    flushed
>  blk-mq-sched: introduce blk_mq_sched_queue_depth()
>  blk-mq-sched: use q->queue_depth as hint for q->nr_requests
>  block: introduce rqhash helpers
>  block: move actual bio merge code into __elv_merge
>  block: add check on elevator for supporting bio merge via hashtable
>    from blk-mq sw queue
>  block: introduce .last_merge and .hash to blk_mq_ctx
>  blk-mq-sched: refactor blk_mq_sched_try_merge()
>  blk-mq: improve bio merge from blk-mq sw queue
> 
> block/blk-mq-debugfs.c  |   1 +
> block/blk-mq-sched.c    | 186 +++++++++++++++++++++++++++++++-----------------
> block/blk-mq-sched.h    |  23 ++++++
> block/blk-mq.c          |  93 +++++++++++++++++++++++-
> block/blk-mq.h          |   7 ++
> block/blk-settings.c    |   2 +
> block/blk.h             |  55 ++++++++++++++
> block/elevator.c        |  93 ++++++++++++++----------
> include/linux/blk-mq.h  |   3 +
> include/linux/sbitmap.h |  54 ++++++++++----
> 10 files changed, 399 insertions(+), 118 deletions(-)
> 
> -- 
> 2.9.5
> 


* [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance
@ 2017-09-02 15:17 Ming Lei
  2017-09-04  9:12 ` Paolo Valente
  2017-09-19 19:25 ` Omar Sandoval
  0 siblings, 2 replies; 8+ messages in thread
From: Ming Lei @ 2017-09-02 15:17 UTC (permalink / raw)
  To: Jens Axboe, linux-block, Christoph Hellwig
  Cc: Bart Van Assche, Laurence Oberman, Paolo Valente, Mel Gorman, Ming Lei

Hi,

In Red Hat's internal storage tests of the blk-mq scheduler, we
found that I/O performance is quite bad with mq-deadline, especially
for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
SRP...)

It turns out that one big issue causes the performance regression:
requests are still dequeued from the sw queue/scheduler queue even
when the LLD's queue is busy, so I/O merging becomes quite difficult,
and sequential I/O degrades a lot.

The first five patches improve this situation and recover some of
the lost performance.
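
To illustrate the idea behind those patches, here is a minimal
user-space C sketch of the concept (not the actual kernel code; the
names and the fake driver depth are made up):

/* Stop pulling requests out of the sw queue once the driver reports
 * busy, so the remaining requests stay in the sw queue where new bios
 * can still be merged into them, instead of piling up on the dispatch
 * list where merging is no longer possible.
 */
#include <stdbool.h>
#include <stdio.h>

#define LLD_DEPTH 2			/* pretend driver queue depth */
static int lld_inflight;

static bool lld_queue_rq(int rq)
{
	if (lld_inflight >= LLD_DEPTH)
		return false;		/* driver busy (think BLK_STS_RESOURCE) */
	lld_inflight++;
	printf("dispatched request %d\n", rq);
	return true;
}

static void dispatch_sw_queue(const int *sw_queue, int *head, int tail)
{
	/* Key point: break out as soon as the driver is busy. */
	while (*head < tail) {
		if (!lld_queue_rq(sw_queue[*head]))
			break;
		(*head)++;
	}
}

int main(void)
{
	int sw_queue[] = { 1, 2, 3, 4, 5 };
	int head = 0;

	dispatch_sw_queue(sw_queue, &head, 5);
	printf("%d requests left in sw queue, still mergeable\n", 5 - head);
	return 0;
}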

Patches 6 and 7 use q->queue_depth as a hint for setting up the
scheduler queue depth.
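
The intent can be sketched as follows (an illustrative user-space
sketch: struct queue_hint, sched_queue_depth(), and the factor of two
are assumptions here, not the exact blk_mq_sched_queue_depth() helper
from patch 7):

#include <stdio.h>

struct queue_hint {
	unsigned int queue_depth;	/* 0 if the device sets no limit */
	unsigned int tag_set_depth;	/* hardware tag-set depth */
};

static unsigned int sched_queue_depth(const struct queue_hint *q)
{
	/* A device-imposed depth (set by SCSI) is the real limit on
	 * parallelism, so prefer it over the tag-set default.
	 */
	if (q->queue_depth)
		return 2 * q->queue_depth;
	return 2 * q->tag_set_depth;
}

int main(void)
{
	struct queue_hint scsi = { .queue_depth = 32, .tag_set_depth = 256 };

	printf("scheduler queue depth hint: %u\n", sched_queue_depth(&scsi));
	return 0;
}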

Patches 8 ~ 14 improve bio merging via a hash table in the sw queue,
which makes bio merging more efficient than the current approach,
in which only the last 8 requests are checked. Since patches 6 ~ 14
convert SCSI devices to the scheduler way of dequeuing one request
from the sw queue at a time, ctx->lock is acquired more often;
merging bios via the hash table decreases the ctx->lock hold time
and should eliminate that effect.
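
The lookup can be sketched like this (a self-contained illustration
mirroring the elevator's back-merge hash, which is keyed by request
end sector; the structure layout, hash size, and helper names are
assumptions, not the kernel's rqhash helpers):

#include <stdio.h>

#define RQ_HASH_BITS 6
#define RQ_HASH_SIZE (1 << RQ_HASH_BITS)

struct request {
	unsigned long long sector, nr_sectors;
	struct request *hash_next;	/* bucket chain */
};

static struct request *rq_hash[RQ_HASH_SIZE];

static unsigned int rq_hash_key(unsigned long long end_sector)
{
	return end_sector & (RQ_HASH_SIZE - 1);
}

static void rq_hash_add(struct request *rq)
{
	unsigned int b = rq_hash_key(rq->sector + rq->nr_sectors);

	rq->hash_next = rq_hash[b];
	rq_hash[b] = rq;
}

/* O(1) bucket walk instead of scanning the last 8 requests:
 * find a request ending exactly where the new bio begins.
 */
static struct request *find_back_merge(unsigned long long bio_sector)
{
	struct request *rq;

	for (rq = rq_hash[rq_hash_key(bio_sector)]; rq; rq = rq->hash_next)
		if (rq->sector + rq->nr_sectors == bio_sector)
			return rq;
	return NULL;
}

int main(void)
{
	struct request a = { .sector = 0, .nr_sectors = 8 };
	struct request b = { .sector = 1024, .nr_sectors = 8 };

	rq_hash_add(&a);
	rq_hash_add(&b);

	if (find_back_merge(8))
		printf("bio at sector 8 back-merges into request 0+8\n");
	return 0;
}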

With these changes, SCSI-MQ sequential I/O performance improves
significantly: Paolo reported that mq-deadline performance improved
a lot[2] in his dbench test with V2, and performance improvement on
lpfc/qla2xxx was also observed with V1.[1]

Also, Bart worried that this patchset may affect SRP, so here is
test data on SCSI SRP this time:

- fio(libaio, bs:4k, dio, queue_depth:64, 64 jobs)
- system(16 cores, dual sockets, mem: 96G)

          |v4.13-rc6+*  |v4.13-rc6+   | patched v4.13-rc6+ 
-----------------------------------------------------
 IOPS(K)  |  DEADLINE   |    NONE     |    NONE     
-----------------------------------------------------
read      |      587.81 |      511.96 |      518.51 
-----------------------------------------------------
randread  |      116.44 |      142.99 |      142.46 
-----------------------------------------------------
write     |      580.87 |       536.4 |      582.15 
-----------------------------------------------------
randwrite |      104.95 |      124.89 |      123.99 
-----------------------------------------------------


          |v4.13-rc6+   |v4.13-rc6+   | patched v4.13-rc6+ 
-----------------------------------------------------
 IOPS(K)  |  DEADLINE   |MQ-DEADLINE  |MQ-DEADLINE  
-----------------------------------------------------
read      |      587.81 |       158.7 |      450.41 
-----------------------------------------------------
randread  |      116.44 |      142.04 |      142.72 
-----------------------------------------------------
write     |      580.87 |      136.61 |      569.37 
-----------------------------------------------------
randwrite |      104.95 |      123.14 |      124.36 
-----------------------------------------------------

*: v4.13-rc6+ means v4.13-rc6 with block for-next
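
For reference, the workload above corresponds to a fio job file along
these lines (a sketch: the target device, runtime, and job name are
assumptions; vary rw= across read/randread/write/randwrite):

[global]
ioengine=libaio
direct=1
bs=4k
iodepth=64
numjobs=64
runtime=60
time_based
group_reporting
; replace /dev/sdX with the SRP-backed test device
filename=/dev/sdX

[seqread]
rw=read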


Please consider merging this for V4.4.

[1] http://marc.info/?l=linux-block&m=150151989915776&w=2
[2] https://marc.info/?l=linux-block&m=150217980602843&w=2

V4:
	- add Reviewed-by tags
	- some trivial changes: typo fixes in commit logs or comments,
	variable names; no actual functional change

V3:
	- fully round-robin picking of requests from ctx, as suggested
	by Bart
	- remove one local variable in __sbitmap_for_each_set()
	- drop the single-dispatch-list patches, which can improve
	performance on mq-deadline but cause a slight degradation on
	none because all hctxs need to be checked after ->dispatch
	is flushed; will post them again once they are mature
	- rebase on v4.13-rc6 with block for-next

V2:
	- dequeue requests from sw queues in round-robin style, as
	suggested by Bart, and introduce one helper in sbitmap for
	this purpose
	- improve bio merging via a hash table in the sw queue
	- add comments about using the DISPATCH_BUSY state in a lockless
	way, simplifying handling of the busy state
	- hold ctx->lock when clearing the ctx busy bit, as suggested
	by Bart


Ming Lei (14):
  blk-mq-sched: fix scheduler bad performance
  sbitmap: introduce __sbitmap_for_each_set()
  blk-mq: introduce blk_mq_dispatch_rq_from_ctx()
  blk-mq-sched: move actual dispatching into one helper
  blk-mq-sched: improve dispatching from sw queue
  blk-mq-sched: don't dequeue request until all in ->dispatch are
    flushed
  blk-mq-sched: introduce blk_mq_sched_queue_depth()
  blk-mq-sched: use q->queue_depth as hint for q->nr_requests
  block: introduce rqhash helpers
  block: move actual bio merge code into __elv_merge
  block: add check on elevator for supporting bio merge via hashtable
    from blk-mq sw queue
  block: introduce .last_merge and .hash to blk_mq_ctx
  blk-mq-sched: refactor blk_mq_sched_try_merge()
  blk-mq: improve bio merge from blk-mq sw queue

 block/blk-mq-debugfs.c  |   1 +
 block/blk-mq-sched.c    | 186 +++++++++++++++++++++++++++++++-----------------
 block/blk-mq-sched.h    |  23 ++++++
 block/blk-mq.c          |  93 +++++++++++++++++++++++-
 block/blk-mq.h          |   7 ++
 block/blk-settings.c    |   2 +
 block/blk.h             |  55 ++++++++++++++
 block/elevator.c        |  93 ++++++++++++++----------
 include/linux/blk-mq.h  |   3 +
 include/linux/sbitmap.h |  54 ++++++++++----
 10 files changed, 399 insertions(+), 118 deletions(-)

-- 
2.9.5


Thread overview: 8+ messages (newest: 2017-09-20  3:18 UTC)
2017-09-06 21:09 [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance Oleksandr Natalenko
2017-09-06 21:22 ` Tom Nguyen
  -- strict thread matches above, loose matches on Subject: below --
2017-09-02 15:17 Ming Lei
2017-09-04  9:12 ` Paolo Valente
2017-09-05  1:39   ` Ming Lei
2017-09-06 15:27     ` Ming Lei
2017-09-19 19:25 ` Omar Sandoval
2017-09-20  3:18   ` Ming Lei
