From: Ming Lei <ming.lei@redhat.com>
To: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	Daniel Wagner <dwagner@suse.de>, Wen Xiong <wenxiong@us.ibm.com>,
	John Garry <john.garry@huawei.com>
Subject: Re: [PATCH 1/2] blk-mq: not deactivate hctx if the device doesn't use managed irq
Date: Tue, 29 Jun 2021 22:17:15 +0800
Message-ID: <YNsrayg+pbVO+J7I@T590>
In-Reply-To: <1a14a397-6244-928e-5aaa-85c2ccbe0e40@suse.de>

On Tue, Jun 29, 2021 at 02:39:14PM +0200, Hannes Reinecke wrote:
> On 6/29/21 9:49 AM, Ming Lei wrote:
> > A hctx is deactivated when all CPUs in hctx->cpumask go offline: all
> > requests originating from this hctx are drained, and new allocations
> > are moved to an active hctx. This avoids inflight IO while the
> > managed irq is shut down.
> > 
> > Some drivers (nvme fc, rdma, tcp, loop) don't use managed irqs, so
> > they don't need to deactivate the hctx. They are also the only users
> > of blk_mq_alloc_request_hctx(), which is used for connecting the io
> > queues, and they require that the connect request can be submitted
> > via one specified hctx even when every CPU in that hctx->cpumask has
> > gone offline.
> > 
> 
> How can you submit a connect request for a hctx on which all CPUs are
> offline? That hctx will be unusable as it'll never be able to receive
> interrupts ...

I believe BLK_MQ_F_NOT_USE_MANAGED_IRQ is self-explanatory. The
(non-managed) interrupt of this hctx will be migrated to an online CPU,
see migrate_one_irq().

For a managed irq, we have to prevent new allocations once all CPUs of
the hctx are offline, because genirq will shut the interrupt down.
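
For illustration, the gist of the hotplug-side change is a sketch like
this (function and field names follow current blk-mq code; the exact
hunk in patch 1/2 may differ):

	/*
	 * Sketch: skip hctx deactivation for drivers that don't use
	 * managed irqs. Their (non-managed) irq simply migrates to an
	 * online CPU on hotplug, so no draining is needed.
	 */
	static int blk_mq_hctx_notify_offline(unsigned int cpu,
					      struct hlist_node *node)
	{
		struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
				struct blk_mq_hw_ctx, cpuhp_online);

		if (hctx->flags & BLK_MQ_F_NOT_USE_MANAGED_IRQ)
			return 0;

		/* ... existing drain/deactivate logic for managed irqs ... */
		return 0;
	}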

> 
> > Address this requirement for nvme fc/rdma/loop, so that the reported
> > kernel panic on the following line in blk_mq_alloc_request_hctx() can
> > be fixed:
> > 
> > 	data.ctx = __blk_mq_get_ctx(q, cpu)
> > 
> > Cc: Sagi Grimberg <sagi@grimberg.me>
> > Cc: Daniel Wagner <dwagner@suse.de>
> > Cc: Wen Xiong <wenxiong@us.ibm.com>
> > Cc: John Garry <john.garry@huawei.com>
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >   block/blk-mq.c         | 6 +++++-
> >   include/linux/blk-mq.h | 1 +
> >   2 files changed, 6 insertions(+), 1 deletion(-)
> > 
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index df5dc3b756f5..74632f50d969 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -494,7 +494,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
> >   	data.hctx = q->queue_hw_ctx[hctx_idx];
> >   	if (!blk_mq_hw_queue_mapped(data.hctx))
> >   		goto out_queue_exit;
> > -	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
> > +	cpu = cpumask_first(data.hctx->cpumask);
> >   	data.ctx = __blk_mq_get_ctx(q, cpu);
> 
> I don't get it.
> Doesn't this allow us to allocate a request on a dead CPU, i.e. the
> very thing we try to prevent?

It is fine to allocate & dispatch a request to the hctx even when all
CPUs in its cpumask are offline, as long as this hctx's interrupt isn't
managed.
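
As a concrete example, patch 2/2 only needs to mark the tag sets of
those drivers; for nvme-loop, something along these lines (a sketch,
the exact hunks are in patch 2/2):

	/*
	 * nvme fabrics drivers complete IO via non-managed irqs (or
	 * softirq/workqueue context), which survive CPU hotplug by
	 * migrating to online CPUs.
	 */
	ctrl->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
			      BLK_MQ_F_NOT_USE_MANAGED_IRQ;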


Thanks,
Ming

Thread overview: 50+ messages
2021-06-29  7:49 [PATCH 0/2] blk-mq: fix blk_mq_alloc_request_hctx Ming Lei
2021-06-29  7:49 ` [PATCH 1/2] blk-mq: not deactivate hctx if the device doesn't use managed irq Ming Lei
2021-06-29 12:39   ` Hannes Reinecke
2021-06-29 14:17     ` Ming Lei [this message]
2021-06-29 15:49   ` John Garry
2021-06-30  0:32     ` Ming Lei
2021-06-30  9:25       ` John Garry
2021-07-01  9:52       ` Christoph Hellwig
2021-06-29 23:30   ` Damien Le Moal
2021-06-30 18:58     ` Sagi Grimberg
2021-06-30 21:57       ` Damien Le Moal
2021-07-01 14:20         ` Keith Busch
2021-06-29  7:49 ` [PATCH 2/2] nvme: pass BLK_MQ_F_NOT_USE_MANAGED_IRQ for fc/rdma/tcp/loop Ming Lei
2021-06-30  8:15   ` Hannes Reinecke
2021-06-30  8:47     ` Ming Lei
2021-06-30  8:18 ` [PATCH 0/2] blk-mq: fix blk_mq_alloc_request_hctx Hannes Reinecke
2021-06-30  8:42   ` Ming Lei
2021-06-30  9:43     ` Hannes Reinecke
2021-06-30  9:53       ` Ming Lei
2021-06-30 18:59         ` Sagi Grimberg
2021-06-30 19:46           ` Hannes Reinecke
2021-06-30 23:59             ` Ming Lei
2021-07-01  8:00               ` Hannes Reinecke
2021-07-01  9:13                 ` Ming Lei
2021-07-02  9:47             ` Daniel Wagner