Linux-Block Archive on lore.kernel.org
From: Ming Lei <ming.lei@redhat.com>
To: Keith Busch <kbusch@kernel.org>
Cc: Bart Van Assche <bvanassche@acm.org>,
	Christoph Hellwig <hch@lst.de>,
	linux-block@vger.kernel.org, John Garry <john.garry@huawei.com>,
	Hannes Reinecke <hare@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: blk-mq: improvement CPU hotplug (simplified version) v3
Date: Sat, 23 May 2020 11:05:43 +0800
Message-ID: <20200523030543.GA786407@T590> (raw)
In-Reply-To: <20200522144720.GC3423299@dhcp-10-100-145-180.wdl.wdc.com>

On Fri, May 22, 2020 at 07:47:20AM -0700, Keith Busch wrote:
> On Fri, May 22, 2020 at 10:39:23AM +0800, Ming Lei wrote:
> > On Thu, May 21, 2020 at 12:15:52PM -0700, Bart Van Assche wrote:
> > > On 2020-05-20 21:33, Ming Lei wrote:
> > > > No.
> > > > 
> > > > If vector 3 is for covering hw queue 12 ~ 15, the vector shouldn't be
> > > > shut down when cpu 14 is offline.
> > > > 
> > > > Also I am pretty sure that we don't do it this way with managed IRQ. And
> > > > non-managed IRQ will be migrated to other online cpus during cpu offline,
> > > > so it is not an issue at all. See migrate_one_irq().
> > > 
> > > Thanks for the pointer to migrate_one_irq().
> > > 
> > > However, I'm not convinced the above statement is correct. My
> > > understanding is that the block driver knows which interrupt vector has
> > > been associated with which hardware queue, but the blk-mq core does not.
> > > It seems to me that patch 6/6 of this series is based on the following
> > > assumptions:
> > > (a) That the interrupt that is associated with a hardware queue is
> > >     processed by one of the CPUs in hctx->cpumask.
> > > (b) That hardware queues do not share interrupt vectors.
> > > 
> > > I don't think that either assumption is correct.
> > 
> > What the patch tries to do is just:
> > 
> > - when the last cpu in hctx->cpumask is going to become offline, mark
> > this hctx as inactive, then drain any in-flight IO requests originating
> > from this hctx
> > 
> > The correctness comes from the fact that once we stop producing requests,
> > we can drain any in-flight requests before shutting down the last cpu of
> > the hctx. Then this hctx finally becomes completely quiesced. Do you think
> > this way is wrong? If yes, please prove it.
> 
> I don't think this applies to what Bart is saying, but there is a
> pathological case where things break down: if a driver uses managed
> IRQs but doesn't use the same affinity for the hctxs, an offline cpu
> may have been the only one providing irq handling for an online hctx.

The driver needs to keep the managed interrupt alive while the hctx is
active, and blk-mq has no knowledge of the managed interrupt or its affinity.

Such abnormal managed-irq usage won't be fixed by this patchset, and it
isn't blk-mq's responsibility to cover it.

Not to mention, Bart didn't share any such example.
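
To make the approach quoted above concrete, here is a minimal sketch of the
idea (purely illustrative, not the actual patch; the cpuhp_online member,
BLK_MQ_S_INACTIVE, and the helpers blk_mq_last_online_cpu_in_hctx() and
blk_mq_hctx_has_requests() are assumed/hypothetical names here):

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/list.h>

/*
 * Illustrative sketch only: called from a CPU hotplug "offline" callback
 * for @cpu. If @cpu is the last online CPU in hctx->cpumask, stop producing
 * requests on this hctx and wait for the in-flight ones to complete.
 */
static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx =
		hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);

	/* Nothing to do unless @cpu is the last online CPU mapped to this hctx */
	if (!cpumask_test_cpu(cpu, hctx->cpumask) ||
	    !blk_mq_last_online_cpu_in_hctx(cpu, hctx))	/* hypothetical helper */
		return 0;

	/*
	 * Stop producing: the request allocation path is assumed to check
	 * this bit and avoid an inactive hctx from now on.
	 */
	set_bit(BLK_MQ_S_INACTIVE, &hctx->state);

	/*
	 * Drain: wait until every request already allocated on this hctx has
	 * completed, so nothing is left in flight when the CPU (and with it
	 * the queue's managed interrupt) goes away.
	 */
	while (blk_mq_hctx_has_requests(hctx))		/* hypothetical helper */
		msleep(5);

	return 0;
}

The point is only the ordering: mark the hctx inactive first so no new
request can pick it, then wait for whatever is already in flight.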

> 
> I feel like that should be a driver bug if it were to set itself up that
> way, but I don't find anything enforces that.

Right, that is a driver issue. Only the driver has knowledge of the
interrupt and its affinity, so such enforcement shouldn't be done in blk-mq.
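
As a purely hypothetical illustration of what such driver-side enforcement
could look like (not part of this series; queue_irq_affinity_sane() is a
made-up name), a driver that spreads its queues itself could verify that
every CPU allowed to submit on a hctx is also covered by the affinity of the
managed IRQ that completes that queue's IO:

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/interrupt.h>

/* Hypothetical driver-side check, not from this patchset */
static bool queue_irq_affinity_sane(struct blk_mq_hw_ctx *hctx, unsigned int irq)
{
	const struct cpumask *irq_mask = irq_get_affinity_mask(irq);

	/*
	 * Every CPU that may submit on this hctx must also be able to take
	 * the completion interrupt; otherwise an offline CPU could end up
	 * being the only one servicing an online hctx, which is the
	 * pathological case mentioned above.
	 */
	return irq_mask && cpumask_subset(hctx->cpumask, irq_mask);
}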


Thanks,
Ming


Thread overview: 31+ messages
2020-05-20 17:06 Christoph Hellwig
2020-05-20 17:06 ` [PATCH 1/6] blk-mq: remove the bio argument to ->prepare_request Christoph Hellwig
2020-05-20 18:16   ` Bart Van Assche
2020-05-22  9:11   ` Hannes Reinecke
2020-05-20 17:06 ` [PATCH 2/6] blk-mq: simplify the blk_mq_get_request calling convention Christoph Hellwig
2020-05-20 18:22   ` Bart Van Assche
2020-05-22  9:13   ` Hannes Reinecke
2020-05-20 17:06 ` [PATCH 3/6] blk-mq: move more request initialization to blk_mq_rq_ctx_init Christoph Hellwig
2020-05-20 20:10   ` Bart Van Assche
2020-05-20 17:06 ` [PATCH 4/6] blk-mq: open code __blk_mq_alloc_request in blk_mq_alloc_request_hctx Christoph Hellwig
2020-05-22  9:17   ` Hannes Reinecke
2020-05-20 17:06 ` [PATCH 5/6] blk-mq: add blk_mq_all_tag_iter Christoph Hellwig
2020-05-20 20:24   ` Bart Van Assche
2020-05-27  6:05     ` Christoph Hellwig
2020-05-22  9:18   ` Hannes Reinecke
2020-05-20 17:06 ` [PATCH 6/6] blk-mq: drain I/O when all CPUs in a hctx are offline Christoph Hellwig
2020-05-22  9:25   ` Hannes Reinecke
2020-05-25  9:20     ` Ming Lei
2020-05-20 21:46 ` blk-mq: improvement CPU hotplug (simplified version) v3 Bart Van Assche
2020-05-21  2:57   ` Ming Lei
2020-05-21  3:50     ` Bart Van Assche
2020-05-21  4:33       ` Ming Lei
2020-05-21 19:15         ` Bart Van Assche
2020-05-22  2:39           ` Ming Lei
2020-05-22 14:47             ` Keith Busch
2020-05-23  3:05               ` Ming Lei [this message]
2020-05-23 15:19             ` Bart Van Assche
2020-05-25  4:09               ` Ming Lei
2020-05-25 15:32                 ` Bart Van Assche
2020-05-25 16:38                   ` Keith Busch
2020-05-26  0:37                   ` Ming Lei
