From: Ming Lei <ming.lei@redhat.com>
To: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	Daniel Wagner <dwagner@suse.de>, Wen Xiong <wenxiong@us.ibm.com>,
	John Garry <john.garry@huawei.com>
Subject: Re: [PATCH 0/2] blk-mq: fix blk_mq_alloc_request_hctx
Date: Wed, 30 Jun 2021 17:53:17 +0800	[thread overview]
Message-ID: <YNw/DcxIIMeg/2VK@T590> (raw)
In-Reply-To: <c1de513a-5477-9d1d-0ddc-24e9166cc717@suse.de>

On Wed, Jun 30, 2021 at 11:43:41AM +0200, Hannes Reinecke wrote:
> On 6/30/21 10:42 AM, Ming Lei wrote:
> > On Wed, Jun 30, 2021 at 10:18:37AM +0200, Hannes Reinecke wrote:
> > > On 6/29/21 9:49 AM, Ming Lei wrote:
> > > > Hi,
> > > > 
> > > > blk_mq_alloc_request_hctx() is used by NVMe fc/rdma/tcp/loop to connect
> > > > io queue. Also the sw ctx is chosen as the 1st online cpu in hctx->cpumask.
> > > > However, all cpus in hctx->cpumask may be offline.
> > > > 
> > > > This usage model isn't well supported by blk-mq, which assumes that
> > > > allocation is always done on an online CPU in hctx->cpumask. This
> > > > assumption is tied to managed irqs, which also require blk-mq to drain
> > > > inflight requests in this hctx when the last cpu in hctx->cpumask is
> > > > going offline.
> > > > 
> > > > However, NVMe fc/rdma/tcp/loop don't use managed irq, so we should allow
> > > > them to ask for request allocation when the specified hctx is inactive
> > > > (all cpus in hctx->cpumask are offline).
> > > > 
> > > > Fix blk_mq_alloc_request_hctx() by adding and passing a new flag,
> > > > BLK_MQ_F_NOT_USE_MANAGED_IRQ.
> > > > 
> > > > 
> > > > Ming Lei (2):
> > > >     blk-mq: not deactivate hctx if the device doesn't use managed irq
> > > >     nvme: pass BLK_MQ_F_NOT_USE_MANAGED_IRQ for fc/rdma/tcp/loop
> > > > 
> > > >    block/blk-mq.c             | 6 +++++-
> > > >    drivers/nvme/host/fc.c     | 3 ++-
> > > >    drivers/nvme/host/rdma.c   | 3 ++-
> > > >    drivers/nvme/host/tcp.c    | 3 ++-
> > > >    drivers/nvme/target/loop.c | 3 ++-
> > > >    include/linux/blk-mq.h     | 1 +
> > > >    6 files changed, 14 insertions(+), 5 deletions(-)
> > > > 
> > > > Cc: Sagi Grimberg <sagi@grimberg.me>
> > > > Cc: Daniel Wagner <dwagner@suse.de>
> > > > Cc: Wen Xiong <wenxiong@us.ibm.com>
> > > > Cc: John Garry <john.garry@huawei.com>
> > > > 
> > > > 
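(For context, the allocation-side code in question looks roughly like the
sketch below - reconstructed from memory, not a verbatim excerpt of
block/blk-mq.c - so today the allocation simply fails once every CPU in
hctx->cpumask is offline:)

	data.hctx = q->queue_hw_ctx[hctx_idx];
	if (!blk_mq_hw_queue_mapped(data.hctx))
		goto out_queue_exit;

	/* sw ctx comes from the first *online* CPU in hctx->cpumask */
	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		goto out_queue_exit;	/* all CPUs in the mask are offline */
	data.ctx = __blk_mq_get_ctx(q, cpu);
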
> > > I have my misgivings about this patchset.
> > > To my understanding, only CPUs present in the hctx cpumask are eligible to
> > > submit I/O to that hctx.
> > 
> > That is only true for managed irqs, and only for the online CPUs in the mask.
> > 
> > However, there is no such constraint for non-managed irqs, since the irq may be
> > migrated to other online CPUs if all CPUs in the irq's current affinity go offline.
> > 
> 
> But there shouldn't be any I/O pending during CPU offline (cf
> blk_mq_hctx_notify_offline()), so no interrupts should be triggered, either,
> no?
> 
> > > Consequently, if all cpus in that mask are offline, what is the point of
> > > even transmitting a 'connect' request?
> > 
> > nvmef requires the connect request to be submitted via one specific hctx,
> > whose index has to be the same as the io queue's index.
> > 
> > Almost all nvmef drivers fail to set up the controller if connecting an
> > io queue fails.
> > 
> 
> And I would prefer to fix that, namely allowing blk-mq to run on a sparse
> set of io queues.
> The remaining io queues can be connected once the first cpu in the hctx
> cpumask is onlined; we already have blk_mq_hctx_notify_online(), which could
> easily be expanded to connect the relevant I/O queue...

Then you need a big patchset to do that.

> 
> > Also CPUs can go offline & online repeatedly, especially in lots of
> > sanity tests.
> > 
> 
> True, but then again all I/O on the hctx should be quiesced during cpu
> offline.

Again, that is only necessary for managed irqs.
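
To make that concrete, the idea behind patch 1/2 (a sketch reconstructed
from the cover letter, not the literal diff) is that the hctx offline
handler only needs to deactivate and drain the hctx when the irq is
managed, so a driver-provided flag can skip that path entirely:

static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
			struct blk_mq_hw_ctx, cpuhp_online);

	/*
	 * A non-managed irq survives CPU offline (it is migrated to
	 * another online CPU), so there is no need to mark the hctx
	 * inactive and drain inflight requests.
	 */
	if (hctx->flags & BLK_MQ_F_NOT_USE_MANAGED_IRQ)
		return 0;

	/* ... existing code: mark hctx inactive, drain inflight requests ... */
	return 0;
}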

> 
> > So we should allow the connect request to be allocated successfully and
> > submitted to the driver, given that this is allowed for non-managed
> > irqs.
> > 
> 
> I'd rather not do this, as the 'connect' command runs on the 'normal' I/O
> tagset, and hence runs the risk of being issued against non-existing
> CPUs.

Can you explain what the risk is?

> 
> > > Shouldn't we rather modify the tagset to refer to the currently online
> > > CPUs _only_, thereby never submitting a connect request for an hctx with
> > > only offline CPUs?
> > 
> > Then you may set up far fewer io queues, and performance may suffer even
> > though lots of CPUs come online later.
> > 
> Only if we stay with the reduced number of I/O queues. Which is not what I'm
> proposing; I'd rather prefer to connect and disconnect queues from the cpu
> hotplug handler. For starters we could even trigger a reset once the first
> cpu within a hctx is onlined.

Yeah, that needs one big/complicated patchset, and I don't see any
advantages over this simple approach.
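
For comparison, the driver-side change in this series should amount to
one flag per transport when the io tag_set is set up - a sketch guessed
from the diffstat, not the actual hunk:

	/* e.g. in the fc/rdma/tcp/loop io tag_set initialization */
	set->flags |= BLK_MQ_F_NOT_USE_MANAGED_IRQ;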


Thanks,
Ming


Thread overview: 50+ messages

2021-06-29  7:49 [PATCH 0/2] blk-mq: fix blk_mq_alloc_request_hctx Ming Lei
2021-06-29  7:49 ` [PATCH 1/2] blk-mq: not deactivate hctx if the device doesn't use managed irq Ming Lei
2021-06-29 12:39   ` Hannes Reinecke
2021-06-29 14:17     ` Ming Lei
2021-06-29 15:49   ` John Garry
2021-06-30  0:32     ` Ming Lei
2021-06-30  9:25       ` John Garry
2021-07-01  9:52       ` Christoph Hellwig
2021-06-29 23:30   ` Damien Le Moal
2021-06-30 18:58     ` Sagi Grimberg
2021-06-30 21:57       ` Damien Le Moal
2021-07-01 14:20         ` Keith Busch
2021-06-29  7:49 ` [PATCH 2/2] nvme: pass BLK_MQ_F_NOT_USE_MANAGED_IRQ for fc/rdma/tcp/loop Ming Lei
2021-06-30  8:15   ` Hannes Reinecke
2021-06-30  8:47     ` Ming Lei
2021-06-30  8:18 ` [PATCH 0/2] blk-mq: fix blk_mq_alloc_request_hctx Hannes Reinecke
2021-06-30  8:42   ` Ming Lei
2021-06-30  9:43     ` Hannes Reinecke
2021-06-30  9:53       ` Ming Lei [this message]
2021-06-30 18:59         ` Sagi Grimberg
2021-06-30 19:46           ` Hannes Reinecke
2021-06-30 23:59             ` Ming Lei
2021-07-01  8:00               ` Hannes Reinecke
2021-07-01  9:13                 ` Ming Lei
2021-07-02  9:47             ` Daniel Wagner
