From: John Garry <john.garry@huawei.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>, <linux-block@vger.kernel.org>,
	"Christoph Hellwig" <hch@lst.de>,
	Bart Van Assche <bvanassche@acm.org>,
	Hannes Reinecke <hare@suse.com>,
	Keith Busch <keith.busch@intel.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	Kashyap Desai <kashyap.desai@broadcom.com>,
	chenxiang <chenxiang66@hisilicon.com>
Subject: Re: [PATCH] blk-mq: Wait for for hctx inflight requests on CPU unplug
Date: Wed, 22 May 2019 10:06:16 +0100	[thread overview]
Message-ID: <ce014369-4bf2-55fe-3c0f-3a46d3a016dc@huawei.com> (raw)
In-Reply-To: <20190522015620.GA11959@ming.t460p>

>>>
>>> +static int blk_mq_hctx_notify_prepare(unsigned int cpu, struct hlist_node *node)
>>> +{
>>> +	struct blk_mq_hw_ctx	*hctx;
>>> +	struct blk_mq_tags	*tags;
>>> +
>>> +	hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead);
>>> +	tags = hctx->tags;
>>> +
>>> +	if (tags)
>>> +		clear_bit(BLK_MQ_TAGS_DRAINED, &tags->flags);
>>> +
>>
>> Hi Ming,
>>
>> Thanks for the effort here.
>>
>> I would like to make an assertion on a related topic, which I hope you can
>> comment on:
>>
>> For this drain mechanism to work, the blk_mq_hw_ctx’s (and related cpu
>> masks) for a request queue are required to match the hw queues used in the
>> LLDD (if using managed interrupts).
>>
>> In other words, a SCSI LLDD needs to expose all hw queues for this to work.
>>
>> I say this because if the LLDD does not expose the hw queues and manages
>> them internally - as some SCSI LLDDs do - yet uses managed interrupts to
>> spread the hw queue MSI vectors across all CPUs, then we still only have a
>> single blk_mq_hw_ctx per request queue with a cpumask covering all CPUs,
>> which is not what we would want.
>

Hi Ming,

> Good catch!
>
> This drain mechanism won't address the issue for those SCSI LLDDs in which:
>
> 	1) blk_mq_hw_ctx serves as the submission hw queue
>
> 	2) one private reply queue serves as the completion queue, for which one
> 	MSI vector with a cpumask is set up via pci_alloc_irq_vectors_affinity(PCI_IRQ_AFFINITY).
>
> What we should drain is only a completion queue whose mapped
> CPUs are all offline.
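
For reference, the setup being described is roughly the pattern below. This
is only an illustrative sketch - the structure, names and queue count are
made up, not taken from any particular LLDD:

#include <linux/pci.h>
#include <linux/interrupt.h>

#define MY_NUM_REPLY_QUEUES	16

static int my_lldd_setup_reply_queues(struct pci_dev *pdev)
{
	/* Reserve vector 0 for admin/event work; spread the rest over all CPUs */
	struct irq_affinity affd = { .pre_vectors = 1 };
	int nvec, i;

	nvec = pci_alloc_irq_vectors_affinity(pdev, 2, MY_NUM_REPLY_QUEUES + 1,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &affd);
	if (nvec < 0)
		return nvec;

	for (i = 1; i < nvec; i++) {
		/* The managed vector's cpumask defines which CPUs' completions
		 * are delivered to reply queue (i - 1). */
		const struct cpumask *mask = pci_irq_get_affinity(pdev, i);

		if (!mask)
			return -EINVAL;
		/* the driver records the mask for reply queue i - 1 here */
	}

	return 0;
}

The block layer still sees only a single blk_mq_hw_ctx in this model, so it
knows nothing about these per-reply-queue cpumasks - hence the problem.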
>
> It looks like you suggest exposing all completion (reply) queues as 'struct blk_mq_hw_ctx',
> which may bring in another, harder problem: how to split the single
> hostwide tag space across the reply queues.

Yes, and this is what I was expecting to hear re. hostwide tags.

> I'd rather not work towards that
> direction because:
>
> 1) it is very hard to partition global resources into several parts, and
> especially hard to make every part happy.
>
> 2) sbitmap is smart/efficient enough for this global allocation
>
> 3) no obvious improvement is obtained from partitioning the resources, according
> to previous experimental results from Kashyap.

I'd also like to run that test.

However, I would need to forward-port the patchset, which no longer 
applies cleanly (I was referring to 
https://lore.kernel.org/linux-block/20180205152035.15016-1-ming.lei@redhat.com/). 
Any help with that would be appreciated.

>
> I think we could implement the drain mechanism in the following way:
>
> 1) if 'struct blk_mq_hw_ctx' serves as completion queue, use the
> approach in the patch

Maybe the gain from exposing multiple queues + managed interrupts outweighs 
the loss in the LLDD of having to generate this unique tag with sbitmap; 
I know that we never used sbitmap in the LLDD for generating the tag when 
testing previously. However, I'm still not too hopeful.
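
To illustrate what generating the unique tag in the LLDD could look like,
here is a minimal sketch using a plain bitmap; a real driver would more
likely use sbitmap for scalability, and all names here are made up:

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/errno.h>

#define MY_MAX_HOST_TAGS	4096

static DECLARE_BITMAP(my_host_tags, MY_MAX_HOST_TAGS);

/* Allocate a tag that is unique across the whole host, regardless of
 * which hw queue the request was submitted on. */
static int my_lldd_get_host_tag(void)
{
	unsigned int tag;

	do {
		tag = find_first_zero_bit(my_host_tags, MY_MAX_HOST_TAGS);
		if (tag >= MY_MAX_HOST_TAGS)
			return -EBUSY;
	} while (test_and_set_bit(tag, my_host_tags));

	return tag;
}

static void my_lldd_put_host_tag(unsigned int tag)
{
	clear_bit(tag, my_host_tags);
}

With a single hostwide tag set the block layer tag is already host-unique,
so this extra allocation (and its contention) is the cost being weighed
against exposing multiple hw queues.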

>
> 2) otherwise:
> - introduce a callback, .prep_queue_dead(hctx, down_cpu), in
> 'struct blk_mq_ops'

This would not be allowed to block, right?

>
> - call .prep_queue_dead from blk_mq_hctx_notify_dead()
>
> 3) inside .prep_queue_dead():
> - the driver checks if all CPUs mapped to the completion queue are offline
> - if yes, wait for in-flight requests originating from all CPUs mapped to
> this completion queue; the wait can be implemented as a block layer API

That could work. However, I think that someone may ask why the LLDD doesn't 
just register for the CPU hotplug event itself (which I would really rather 
avoid), instead of having the info relayed from the block layer.
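
To make the proposal concrete, here is a rough sketch of what an LLDD-side
.prep_queue_dead() might look like. The callback itself and the
blk_mq_wait_for_completions() helper do not exist today - they are only the
proposal above - and every other name is made up for illustration:

#include <linux/blk-mq.h>
#include <linux/cpumask.h>

struct my_reply_queue {
	struct cpumask cpumask;		/* CPUs serviced by this reply queue */
};

/* Driver-specific lookup of the reply queue that 'cpu' is mapped to. */
static struct my_reply_queue *my_lldd_reply_queue(struct request_queue *q,
						  unsigned int cpu);

/* Hypothetical new block layer API from step 3) above. */
void blk_mq_wait_for_completions(struct blk_mq_hw_ctx *hctx,
				 const struct cpumask *mask);

static void my_lldd_prep_queue_dead(struct blk_mq_hw_ctx *hctx,
				    unsigned int down_cpu)
{
	struct my_reply_queue *rqu = my_lldd_reply_queue(hctx->queue, down_cpu);
	unsigned int cpu;

	/* If any other CPU mapped to this reply queue is still online,
	 * its completions can still be serviced - nothing to do yet. */
	for_each_cpu(cpu, &rqu->cpumask)
		if (cpu != down_cpu && cpu_online(cpu))
			return;

	/*
	 * All mapped CPUs are going away: wait for in-flight requests
	 * whose completions are routed to this reply queue. This wait is
	 * the part that would become the new block layer API.
	 */
	blk_mq_wait_for_completions(hctx, &rqu->cpumask);
}

Whether this context is allowed to block is exactly the open question above.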

>
> Any comments on the above approach?
>
> Thanks,
> Ming

Cheers,
John


Thread overview: 11+ messages
2019-05-17  9:14 [PATCH] blk-mq: Wait for for hctx inflight requests on CPU unplug Ming Lei
2019-05-21  7:40 ` Christoph Hellwig
2019-05-21  8:03   ` Ming Lei
2019-05-21 13:50 ` John Garry
2019-05-22  1:56   ` Ming Lei
2019-05-22  9:06     ` John Garry [this message]
2019-05-22  9:47       ` Hannes Reinecke
2019-05-22 10:31         ` Ming Lei
2019-05-22 12:30         ` John Garry
2019-05-22 10:01       ` Ming Lei
2019-05-22 12:21         ` John Garry
