From: Hannes Reinecke <hare@suse.de>
To: Ming Lei <ming.lei@redhat.com>, Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>, Daniel Wagner <dwagner@suse.de>,
	Wen Xiong <wenxiong@us.ibm.com>,
	John Garry <john.garry@huawei.com>
Subject: Re: [PATCH 0/2] blk-mq: fix blk_mq_alloc_request_hctx
Date: Wed, 30 Jun 2021 10:18:37 +0200	[thread overview]
Message-ID: <5f304121-38ce-034b-2d17-93d136c77fe6@suse.de> (raw)
In-Reply-To: <20210629074951.1981284-1-ming.lei@redhat.com>

On 6/29/21 9:49 AM, Ming Lei wrote:
> Hi,
> 
> blk_mq_alloc_request_hctx() is used by NVMe fc/rdma/tcp/loop to connect
> io queues, and the sw ctx is chosen as the 1st online cpu in
> hctx->cpumask. However, all cpus in hctx->cpumask may be offline.
> 
> This usage model isn't well supported by blk-mq, which assumes that
> allocation is always done on an online CPU in hctx->cpumask. This
> assumption is tied to managed irqs, which also require blk-mq to drain
> inflight requests in the hctx when the last cpu in hctx->cpumask goes
> offline.
> 
> However, NVMe fc/rdma/tcp/loop don't use managed irqs, so we should
> allow them to allocate requests even when the specified hctx is
> inactive (all cpus in hctx->cpumask are offline).
> 
> Fix blk_mq_alloc_request_hctx() by adding and passing the new flag
> BLK_MQ_F_NOT_USE_MANAGED_IRQ.
> 
> 
> Ming Lei (2):
>    blk-mq: not deactivate hctx if the device doesn't use managed irq
>    nvme: pass BLK_MQ_F_NOT_USE_MANAGED_IRQ for fc/rdma/tcp/loop
> 
>   block/blk-mq.c             | 6 +++++-
>   drivers/nvme/host/fc.c     | 3 ++-
>   drivers/nvme/host/rdma.c   | 3 ++-
>   drivers/nvme/host/tcp.c    | 3 ++-
>   drivers/nvme/target/loop.c | 3 ++-
>   include/linux/blk-mq.h     | 1 +
>   6 files changed, 14 insertions(+), 5 deletions(-)
> 
> Cc: Sagi Grimberg <sagi@grimberg.me>
> Cc: Daniel Wagner <dwagner@suse.de>
> Cc: Wen Xiong <wenxiong@us.ibm.com>
> Cc: John Garry <john.garry@huawei.com>
> 
> 
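For reference, the allocation path in question picks the sw ctx roughly
like this (simplified from blk-mq.c, quoting from memory), and as I read
the cover letter the new flag would be wired up along these lines -- a
sketch only, not the actual hunks:

    /* blk_mq_alloc_request_hctx(), simplified: pick an online cpu */
    cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
    data.ctx = __blk_mq_get_ctx(q, cpu); /* breaks if all of them are offline */

    /* driver side (e.g. nvme-tcp), when setting up the I/O tag_set */
    set->flags |= BLK_MQ_F_NOT_USE_MANAGED_IRQ;

    /* blk-mq side: in blk_mq_hctx_notify_offline(), before deactivating */
    if (hctx->flags & BLK_MQ_F_NOT_USE_MANAGED_IRQ)
        return 0; /* no managed irq, no need to drain and deactivate */
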
I have my misgivings about this patchset.
To my understanding, only CPUs present in the hctx cpumask are eligible
to submit I/O to that hctx.
Consequently, if all CPUs in that mask are offline, what is the point of
even sending a 'connect' request?
Shouldn't we rather modify the tagset to refer only to the currently
online CPUs, and thereby never submit a connect request for an hctx
whose CPUs are all offline?
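Just to illustrate the alternative, something along these lines when
starting the I/O queues (untested sketch; nvme-tcp as an example, and
the helper name is made up):

    /* does any online CPU map to hw queue 'hctx_idx'? */
    static bool nvme_hctx_has_online_cpu(struct blk_mq_tag_set *set,
                                         int hctx_idx)
    {
        int cpu;

        for_each_online_cpu(cpu)
            if (set->map[HCTX_TYPE_DEFAULT].mq_map[cpu] == hctx_idx)
                return true;
        return false;
    }

    /* in nvme_tcp_start_io_queues(): io queue i maps to hctx i - 1 */
    for (i = 1; i < ctrl->queue_count; i++) {
        if (!nvme_hctx_has_online_cpu(ctrl->tagset, i - 1))
            continue; /* no online CPU, don't send a connect */
        ret = nvme_tcp_start_queue(ctrl, i);
        ...
    }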

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


