* Re: [PATCH V2 0/5] blk-mq: improvement on handling IO during CPU hotplug
From: John Garry @ 2019-08-12 16:21 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: linux-block, Minwoo Im, Bart Van Assche, Hannes Reinecke,
	Christoph Hellwig, Thomas Gleixner, Keith Busch, chenxiang,
	linux-scsi

On 12/08/2019 14:46, Ming Lei wrote:
> Hi John,
>
> On Mon, Aug 12, 2019 at 09:43:07PM +0800, Ming Lei wrote:
>> Hi,
>>
>> Thomas mentioned:
>>     "
>>      That was the constraint of managed interrupts from the very beginning:
>>
>>       The driver/subsystem has to quiesce the interrupt line and the associated
>>       queue _before_ it gets shutdown in CPU unplug and not fiddle with it
>>       until it's restarted by the core when the CPU is plugged in again.
>>     "
>>
>> But neither the drivers nor blk-mq do that before a hctx becomes dead (all
>> CPUs mapped to that hctx are offline). Even worse, blk-mq still tries
>> to run the hw queue after the hctx is dead, see blk_mq_hctx_notify_dead().
>>
>> This patchset tries to address the issue in two stages:
>>
>> 1) add one new cpuhp state of CPUHP_AP_BLK_MQ_ONLINE
>>
>> - mark the hctx as internally stopped, and drain all in-flight requests
>> if the hctx is about to become dead.
>>
>> 2) re-submit IO in the state of CPUHP_BLK_MQ_DEAD after the hctx becomes dead
>>
>> - steal the bios from each request and resubmit them via generic_make_request(),
>> so these IOs get mapped to another live hctx for dispatch (see the sketch
>> below)
>>
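To make the two stages concrete, here is a rough sketch of the idea. It is not the
actual patches: CPUHP_AP_BLK_MQ_ONLINE and BLK_MQ_S_INTERNAL_STOPPED are the state and
flag this series proposes, while the hctx cpuhp_online node and
blk_mq_hctx_has_requests() below are made-up stand-ins for illustration only.

/* Illustrative sketch only, not the code from this series. */
#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/bitops.h>
#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/delay.h>

/* made-up stand-in: "does this hctx still have requests in flight?" */
static bool blk_mq_hctx_has_requests(struct blk_mq_hw_ctx *hctx);

/*
 * Stage 1: teardown callback for the proposed CPUHP_AP_BLK_MQ_ONLINE state.
 * It runs before the outgoing CPU and its managed IRQ are shut down.
 */
static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
{
        struct blk_mq_hw_ctx *hctx =
                hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);
        unsigned int other;

        if (!cpumask_test_cpu(cpu, hctx->cpumask))
                return 0;

        /* only act when the last online CPU mapped to this hctx goes away */
        for_each_cpu_and(other, hctx->cpumask, cpu_online_mask)
                if (other != cpu)
                        return 0;

        /* stop taking new requests on this hctx (flag proposed in patch 1) */
        set_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);

        /* drain whatever has already been dispatched to the hardware */
        while (blk_mq_hctx_has_requests(hctx))
                msleep(5);

        return 0;
}

/*
 * Stage 2: called from the existing CPUHP_BLK_MQ_DEAD step for each request
 * still queued on the dead hctx: steal its bios and resubmit them, so that
 * generic_make_request() maps them to a live hctx.
 */
static void blk_mq_resubmit_request(struct request *rq)
{
        struct bio_list list;
        struct bio *bio;

        bio_list_init(&list);
        blk_steal_bios(&list, rq);          /* detach bios from the dead request */
        blk_mq_end_request(rq, BLK_STS_OK); /* release the old request and its tag */

        while ((bio = bio_list_pop(&list)))
                generic_make_request(bio);
}

The registration side would just add each hctx as an instance of the new cpuhp state
(cpuhp_state_add_instance_nocalls()), mirroring what blk-mq already does for
CPUHP_BLK_MQ_DEAD.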
>> Please comment & review, thanks!
>>
>> V2:
>> 	- patches 4 & 5 in V1 have been merged into the block tree, so they
>> 	  are dropped
>> 	- address comments from John Garry and Minwoo
>>
>>
>> Ming Lei (5):
>>   blk-mq: add new state of BLK_MQ_S_INTERNAL_STOPPED
>>   blk-mq: add blk-mq flag of BLK_MQ_F_NO_MANAGED_IRQ
>>   blk-mq: stop to handle IO before hctx's all CPUs become offline
>>   blk-mq: re-submit IO in case that hctx is dead
>>   blk-mq: handle requests dispatched from IO scheduler in case that hctx
>>     is dead
>>
>>  block/blk-mq-debugfs.c     |   2 +
>>  block/blk-mq-tag.c         |   2 +-
>>  block/blk-mq-tag.h         |   2 +
>>  block/blk-mq.c             | 143 +++++++++++++++++++++++++++++++++----
>>  block/blk-mq.h             |   3 +-
>>  drivers/block/loop.c       |   2 +-
>>  drivers/md/dm-rq.c         |   2 +-
>>  include/linux/blk-mq.h     |   5 ++
>>  include/linux/cpuhotplug.h |   1 +
>>  9 files changed, 146 insertions(+), 16 deletions(-)
>>
>> Cc: Bart Van Assche <bvanassche@acm.org>
>> Cc: Hannes Reinecke <hare@suse.com>
>> Cc: Christoph Hellwig <hch@lst.de>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Keith Busch <keith.busch@intel.com>
>> --
>> 2.20.1
>>
>
> Sorry for forgetting to Cc you.

Already subscribed :)

I don't mean to hijack this thread, but JFYI we're getting around to
testing https://github.com/ming1/linux/commits/v5.2-rc-host-tags-V2 -
unfortunately we're still seeing a performance regression, and I can't see
where it's coming from. We're double-checking the test, though.

Thanks,
John


* Re: [PATCH V2 0/5] blk-mq: improvement on handling IO during CPU hotplug
From: Ming Lei @ 2019-08-12 22:45 UTC (permalink / raw)
  To: John Garry
  Cc: Jens Axboe, linux-block, Minwoo Im, Bart Van Assche,
	Hannes Reinecke, Christoph Hellwig, Thomas Gleixner, Keith Busch,
	chenxiang, linux-scsi

On Mon, Aug 12, 2019 at 05:21:44PM +0100, John Garry wrote:
> I don't mean to hijack this thread, but JFYI we're getting around to
> testing https://github.com/ming1/linux/commits/v5.2-rc-host-tags-V2 -
> unfortunately we're still seeing a performance regression, and I can't see
> where it's coming from. We're double-checking the test, though.

The host-tags patchset is only for the few particular drivers which use a
private reply queue as their completion queue.

This patchset handles the generic blk-mq CPU hotplug issue; the few SCSI
drivers with a private reply queue (hisi_sas_v3, hpsa, megaraid_sas and
mpt3sas) aren't covered by it so far.

I'd suggest we move on with generic blk-mq devices first, given that blk-mq
is now the only request IO path.

There are at least two choices for handling drivers/devices with a private
completion queue:

1) host-tags
- the performance issue shouldn't be hard to solve, given that in theory it
is the same as single tags; only corner cases are left.

What I'm not happy about with this approach is that the blk-mq-tag code
becomes a mess.

2) private callback
- we could simply define a private callback that drains each completion
  queue in the driver (see the sketch below).
- the problem is that the four drivers would have to duplicate the same job
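FWIW, option 2 would look roughly like the sketch below. The ->drain_hctx() hook and
the foo_* names are hypothetical (no such blk-mq callback exists today); it only shows
the shape of the work each of the four drivers would end up duplicating.

/* Hypothetical sketch of option 2 -- not an existing blk-mq interface. */
#include <linux/blk-mq.h>

struct foo_hba;                         /* driver-private HBA state */

/*
 * Would be invoked by blk-mq before the last CPU mapped to this hctx (and
 * hence to the driver's private reply queue) goes offline.
 */
static void foo_drain_hctx(struct blk_mq_hw_ctx *hctx)
{
        struct foo_hba *hba = hctx->driver_data;   /* set in ->init_hctx() */
        unsigned int reply_q = hctx->queue_num;    /* assumes 1:1 hctx <-> reply queue */

        /*
         * 1) stop issuing commands that would complete on this reply queue;
         * 2) poll the reply queue until all outstanding completions are
         *    reaped, so nothing is pending when its managed IRQ is shut down;
         * 3) optionally re-route later completions to another live reply queue.
         *
         * The driver-specific steps are omitted; hba and reply_q only show
         * what such a callback would need to know.
         */
        (void)hba;
        (void)reply_q;
}

static const struct blk_mq_ops foo_mq_ops = {
        /* .queue_rq   = foo_queue_rq, */
        /* .drain_hctx = foo_drain_hctx,      <-- the hypothetical new callback */
};

The body of that function is exactly the work that would be repeated, with driver-local
variations, in each of the four drivers.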


Thanks,
Ming
