linux-block.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: John Garry <john.garry@huawei.com>
To: Hannes Reinecke <hare@suse.de>, Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>, <linux-block@vger.kernel.org>,
	"Christoph Hellwig" <hch@lst.de>,
	Bart Van Assche <bvanassche@acm.org>,
	Hannes Reinecke <hare@suse.com>,
	Keith Busch <keith.busch@intel.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	Kashyap Desai <kashyap.desai@broadcom.com>,
	chenxiang <chenxiang66@hisilicon.com>
Subject: Re: [PATCH] blk-mq: Wait for for hctx inflight requests on CPU unplug
Date: Wed, 22 May 2019 13:30:31 +0100	[thread overview]
Message-ID: <11bb0171-d5a2-1faa-6fd6-6204b5a56cfc@huawei.com> (raw)
In-Reply-To: <1deeda32-eac2-9056-f17b-3a643e671374@suse.de>

>
> But I still do think we need to handle this case; the HBA might not
> expose enough MSI-X vectors/hw queues for us to map to all CPUs.
> In which case we'd be running into the same situation.
>
> And I do think we _need_ to drain the associated completion queue as
> soon as _any_ CPU in that set it plugged; otherwise we can't ensure that
> any interrupt for pending I/O will _not_ arrive at the dead CPU.

Really? I did not think that it was possible for this to happen.

>
> And yes, this would amount to quiesce the HBA completely if only one
> queue is exposed. But there's no way around this; the alternative would
> be to code a fallback patch in each driver to catch missing completions.
> Which would actually be an interface change, requiring each vendor /
> maintainer to change their driver. Not very nice.
>
>>> Looks you suggest to expose all completion(reply) queues as 'struct
>>> blk_mq_hw_ctx',
>>> which may involve in another more hard problem:  how to split the single
>>> hostwide tags into each reply queue.
>>
>> Yes, and this is what I expecting to hear Re. hostwide tags.
>>
> But this case is handled already; things like lpfc and qla2xxx have been
> converted to this model (exposing all hw queues, and use a host-wide
> tagmap).
>
> So from that side there is not really an issue.
>
> I even provided patchset to convert megaraid_sas (cf 'megaraid_sas:
> enable blk-mq for fusion'); you might want to have a look there to see
> how it can be done.

ok, I'll have a search.

>
>> I'd rather not work towards that
>>> direction because:
>>>
>>> 1) it is very hard to partition global resources into several parts,
>>> especially it is hard to make every part happy.
>>>
>>> 2) sbitmap is smart/efficient enough for this global allocation
>>>
>>> 3) no obvious improvement is obtained from the resource partition,
>>> according
>>> to previous experiment result done by Kashyap.
>>
>> I'd like to also do the test.
>>
>> However I would need to forward port the patchset, which no longer
>> cleanly applies (I was referring to this
>> https://lore.kernel.org/linux-block/20180205152035.15016-1-ming.lei@redhat.com/).
>> Any help with that would be appreciated.
>>
> If you would post it on the mailing list (or send it to me) I can have a
> look. Converting SAS is on my list of things to do, anyway.
>

ok

>>>
>>> I think we could implement the drain mechanism in the following way:
>>>
>>> 1) if 'struct blk_mq_hw_ctx' serves as completion queue, use the
>>> approach in the patch
>>
>> Maybe the gain of exposing multiple queues+managed interrupts
>> outweighs the loss in the LLDD of having to generate this unique tag
>> with sbitmap; I know that we did not use sbitmap ever in the LLDD for
>> generating the tag when testing previously. However I'm still not too
>> hopeful.
>>
> Thing is, the tag _is_ already generated by the time the command is
> passed to the LLDD. So there is no overhead; you just need to establish
> a 1:1 mapping between SCSI cmds from the midlayer and your internal
> commands.
>
> Which is where the problem starts: if you have to use the same command
> pool for internal commands you have to set some tags aside to avoid a
> clash with the tags generated from the block layer.
> That's easily done, but if you do that quiescing is getting harder, as
> then the block layer wouldn't know about these internal commands.
> This is what I'm trying to address with my patchset to use private tags
> in SCSI, as then the block layer maintains all tags, and is able to
> figure out if the queue really is quiesced.
> (And I really need to post my patchset).

Ack

>
>>>
>>> 2) otherwise:
>>> - introduce one callbcack of .prep_queue_dead(hctx, down_cpu) to
>>> 'struct blk_mq_ops'
>>
>> This would not be allowed to block, right?
>>
>>>
>>> - call .prep_queue_dead from blk_mq_hctx_notify_dead()
>>>
>>> 3) inside .prep_queue_dead():
>>> - the driver checks if all mapped CPU on the completion queue is offline
>>> - if yes, wait for in-flight requests originated from all CPUs mapped to
>>> this completion queue, and it can be implemented as one block layer API
>>
>> That could work. However I think that someone may ask why the LLDD
>> just doesn't register for the CPU hotplug event itself (which I would
>> really rather avoid), instead of being relayed the info from the block
>> layer.
>>
> Again; what would you do if not all CPUs from a pool are gone?
> You still might be getting interrupts for non-associated interrupts, and
> quite some drivers are unhappy under these circumstances.
> Hence I guess it'll be better to quiesce the queue as soon as _any_ CPU
> from the pool is gone.
>
> Plus we could be doing this from the block layer without any callbacks
> from the driver...
>
> Cheers,
>
> Hannes

Thanks,
John




  parent reply	other threads:[~2019-05-22 12:30 UTC|newest]

Thread overview: 11+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2019-05-17  9:14 [PATCH] blk-mq: Wait for for hctx inflight requests on CPU unplug Ming Lei
2019-05-21  7:40 ` Christoph Hellwig
2019-05-21  8:03   ` Ming Lei
2019-05-21 13:50 ` John Garry
2019-05-22  1:56   ` Ming Lei
2019-05-22  9:06     ` John Garry
2019-05-22  9:47       ` Hannes Reinecke
2019-05-22 10:31         ` Ming Lei
2019-05-22 12:30         ` John Garry [this message]
2019-05-22 10:01       ` Ming Lei
2019-05-22 12:21         ` John Garry

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=11bb0171-d5a2-1faa-6fd6-6204b5a56cfc@huawei.com \
    --to=john.garry@huawei.com \
    --cc=axboe@kernel.dk \
    --cc=bvanassche@acm.org \
    --cc=chenxiang66@hisilicon.com \
    --cc=hare@suse.com \
    --cc=hare@suse.de \
    --cc=hch@lst.de \
    --cc=kashyap.desai@broadcom.com \
    --cc=keith.busch@intel.com \
    --cc=linux-block@vger.kernel.org \
    --cc=linux-scsi@vger.kernel.org \
    --cc=ming.lei@redhat.com \
    --cc=tglx@linutronix.de \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).