From: Keith Busch <kbusch@kernel.org>
To: Ming Lei <ming.lei@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>,
	Christoph Hellwig <hch@lst.de>,
	linux-block@vger.kernel.org, John Garry <john.garry@huawei.com>,
	Hannes Reinecke <hare@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: blk-mq: improvement CPU hotplug (simplified version) v3
Date: Fri, 22 May 2020 07:47:20 -0700
Message-ID: <20200522144720.GC3423299@dhcp-10-100-145-180.wdl.wdc.com>
In-Reply-To: <20200522023923.GC755458@T590>

On Fri, May 22, 2020 at 10:39:23AM +0800, Ming Lei wrote:
> On Thu, May 21, 2020 at 12:15:52PM -0700, Bart Van Assche wrote:
> > On 2020-05-20 21:33, Ming Lei wrote:
> > > No.
> > > 
> > > If vector 3 covers hw queues 12 ~ 15, the vector shouldn't be
> > > shut down when cpu 14 goes offline.
> > > 
> > > Also, I am pretty sure that we don't do it this way with managed
> > > IRQs. And non-managed IRQs are migrated to other online CPUs during
> > > CPU offline, so they are not an issue at all. See migrate_one_irq().
> > 
> > Thanks for the pointer to migrate_one_irq().
> > 
> > However, I'm not convinced that the above statement is correct. My
> > understanding is that the block driver knows which interrupt vector
> > has been associated with which hardware queue, but the blk-mq core
> > does not. It
> > seems to me that patch 6/6 of this series is based on the following
> > assumptions:
> > (a) That the interrupt that is associated with a hardware queue is
> >     processed by one of the CPUs in hctx->cpumask.
> > (b) That hardware queues do not share interrupt vectors.
> > 
> > I don't think that either assumption is correct.
> 
> What the patch tries to do is just:
> 
> - when the last CPU in hctx->cpumask is about to go offline, mark
> this hctx as inactive, then drain any in-flight IO requests
> originating from this hctx
> 
> The correctness argument is that once we stop producing requests, we
> can drain any in-flight requests before shutting down the last CPU of
> the hctx. Then this hctx finally becomes completely quiesced. Do you
> think this approach is wrong? If so, please prove it.
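
For concreteness, the quoted scheme amounts to something like the
following minimal sketch (hand-written here for illustration;
blk_mq_last_cpu_in_hctx() and blk_mq_hctx_has_requests() are
hypothetical helper names, and the exact flag and callback wiring may
differ from what the actual patch does):

static int blk_mq_hctx_notify_offline(unsigned int cpu,
				      struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx =
		hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);

	/* Act only when @cpu is the last online CPU in hctx->cpumask. */
	if (!blk_mq_last_cpu_in_hctx(cpu, hctx))
		return 0;

	/*
	 * Stop producing: mark the hctx inactive so request allocation
	 * stops handing out tags for it.
	 */
	set_bit(BLK_MQ_S_INACTIVE, &hctx->state);

	/* Drain: wait for every in-flight request on this hctx. */
	while (blk_mq_hctx_has_requests(hctx))
		msleep(5);

	return 0;
}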

I don't think this applies to what Bart is saying, but there is a
pathological case where things break down: if a driver uses managed
IRQs but doesn't use the same affinity for its hctxs, an offline CPU
may have been the only one providing IRQ handling for an online hctx.

I feel like it should be a driver bug for a device to set itself up
that way, but I don't find anything that enforces it.
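
And to make the failure mode concrete, the missing enforcement would be
something like the following init-time sanity check (hypothetical,
sketched for this discussion; no such check exists in the tree as far
as I can tell):

static int blk_mq_check_hctx_irq_affinity(struct blk_mq_hw_ctx *hctx,
					  unsigned int irq)
{
	const struct cpumask *irq_mask = irq_get_affinity_mask(irq);

	/*
	 * The managed IRQ backing this hctx must be handled by a CPU
	 * that can also submit to the hctx; otherwise offlining the
	 * handling CPU shuts the vector down while the hctx is still
	 * online, and nobody services its completions.
	 */
	if (!irq_mask || !cpumask_intersects(irq_mask, hctx->cpumask))
		return -EINVAL;

	return 0;
}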


Thread overview: 31+ messages
2020-05-20 17:06 blk-mq: improvement CPU hotplug (simplified version) v3 Christoph Hellwig
2020-05-20 17:06 ` [PATCH 1/6] blk-mq: remove the bio argument to ->prepare_request Christoph Hellwig
2020-05-20 18:16   ` Bart Van Assche
2020-05-22  9:11   ` Hannes Reinecke
2020-05-20 17:06 ` [PATCH 2/6] blk-mq: simplify the blk_mq_get_request calling convention Christoph Hellwig
2020-05-20 18:22   ` Bart Van Assche
2020-05-22  9:13   ` Hannes Reinecke
2020-05-20 17:06 ` [PATCH 3/6] blk-mq: move more request initialization to blk_mq_rq_ctx_init Christoph Hellwig
2020-05-20 20:10   ` Bart Van Assche
2020-05-20 17:06 ` [PATCH 4/6] blk-mq: open code __blk_mq_alloc_request in blk_mq_alloc_request_hctx Christoph Hellwig
2020-05-22  9:17   ` Hannes Reinecke
2020-05-20 17:06 ` [PATCH 5/6] blk-mq: add blk_mq_all_tag_iter Christoph Hellwig
2020-05-20 20:24   ` Bart Van Assche
2020-05-27  6:05     ` Christoph Hellwig
2020-05-22  9:18   ` Hannes Reinecke
2020-05-20 17:06 ` [PATCH 6/6] blk-mq: drain I/O when all CPUs in a hctx are offline Christoph Hellwig
2020-05-22  9:25   ` Hannes Reinecke
2020-05-25  9:20     ` Ming Lei
2020-05-20 21:46 ` blk-mq: improvement CPU hotplug (simplified version) v3 Bart Van Assche
2020-05-21  2:57   ` Ming Lei
2020-05-21  3:50     ` Bart Van Assche
2020-05-21  4:33       ` Ming Lei
2020-05-21 19:15         ` Bart Van Assche
2020-05-22  2:39           ` Ming Lei
2020-05-22 14:47             ` Keith Busch [this message]
2020-05-23  3:05               ` Ming Lei
2020-05-23 15:19             ` Bart Van Assche
2020-05-25  4:09               ` Ming Lei
2020-05-25 15:32                 ` Bart Van Assche
2020-05-25 16:38                   ` Keith Busch
2020-05-26  0:37                   ` Ming Lei
