linux-block.vger.kernel.org archive mirror
From: Ming Lei <ming.lei@redhat.com>
To: Daniel Wagner <dwagner@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
	linux-block@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	John Garry <john.garry@huawei.com>,
	Sagi Grimberg <sagi@grimberg.me>, Wen Xiong <wenxiong@us.ibm.com>,
	James Smart <james.smart@broadcom.com>
Subject: Re: [PATCH V7 3/3] blk-mq: don't deactivate hctx if managed irq isn't used
Date: Thu, 16 Sep 2021 16:13:55 +0800	[thread overview]
Message-ID: <YUL8wz6CM4jrUeeN@T590> (raw)
In-Reply-To: <20210916074229.7ntbn7prnv3fmmm2@carbon.lan>

On Thu, Sep 16, 2021 at 09:42:29AM +0200, Daniel Wagner wrote:
> On Thu, Sep 16, 2021 at 10:17:18AM +0800, Ming Lei wrote:
> > Firstly, even with the patches of 'qla2xxx - add nvme map_queues support',
> > the knowledge of whether managed irq is used is still missing in the nvmef
> > LLD, so blk_mq_hctx_use_managed_irq() may always return false, but that
> > shouldn't be hard to solve.
> 
> Yes, that's pretty simple:
> 
> --- a/drivers/scsi/qla2xxx/qla_os.c
> +++ b/drivers/scsi/qla2xxx/qla_os.c
> @@ -7914,6 +7914,9 @@ static int qla2xxx_map_queues(struct Scsi_Host *shost)
>                 rc = blk_mq_map_queues(qmap);
>         else
>                 rc = blk_mq_pci_map_queues(qmap, vha->hw->pdev, vha->irq_offset);
> +
> +       qmap->use_managed_irq = true;
> +
>         return rc;
>  }

blk_mq_alloc_request_hctx() won't be called on a qla2xxx queue; what we
need is to mark the nvmef queue map as .use_managed_irq if the LLD uses
managed irq.
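
Something along these lines on the nvme-fc side could do it. Untested
sketch only: nvme_fc_map_queues() and the lport_uses_managed_irq()
helper are both hypothetical here, they don't exist in mainline yet:

	/*
	 * Hypothetical nvme-fc .map_queues callback: map the queues as
	 * usual, then propagate the LLD's knowledge about managed irq
	 * into the queue map so blk_mq_hctx_use_managed_irq() can see it.
	 */
	static int nvme_fc_map_queues(struct blk_mq_tag_set *set)
	{
		struct blk_mq_queue_map *qmap = &set->map[HCTX_TYPE_DEFAULT];
		int ret = blk_mq_map_queues(qmap);

		/* assumed helper: LLD reports whether it uses managed irq */
		qmap->use_managed_irq = lport_uses_managed_irq(set->driver_data);

		return ret;
	}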

> 
> > The problem is that we still should make the connect io queue command
> > complete when all CPUs of this hctx are offline in case of managed irq.
> 
> I agree, though if I understand this right, the scenario where all CPUs
> in a hctx are offline while we still want to use this hctx only happens
> after an initial setup when a reconnect attempt is made. That is,
> during the first connect attempt only online CPUs are assigned to the
> hctx. When the CPUs are taken offline, the block layer makes sure not to
> use those queues anymore (no problem for the hctx so far). Then for some
> reason the nvme-fc layer decides to reconnect and we end up in the
> situation where we don't have any online CPU in the given hctx.

It is simply that blk_mq_alloc_request_hctx() allocates a request from
one caller-specified hctx, and all CPUs of that hctx can go offline at
any time.
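
The window comes straight from how blk_mq_alloc_request_hctx() picks
its CPU; roughly (simplified from block/blk-mq.c):

	/*
	 * Caller asked for a request on a specific hctx: pick any online
	 * CPU mapped to that hctx.  If every CPU in hctx->cpumask has
	 * gone offline meanwhile, the allocation has nowhere to run and
	 * fails -- this is the race in question.
	 */
	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		return ERR_PTR(-EINVAL);
	data.ctx = __blk_mq_get_ctx(q, cpu);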

> 
> > One solution might be to use io polling for connecting io queue, but nvme fc
> > doesn't support polling, all the other nvme hosts do support it.
> 
> No idea, something to explore for sure :)

It is just a raw idea, something like: start each queue in poll mode,
and run the connect io queue command via polling. Once the connect io
queue command is done, switch the queue into normal (interrupt) mode.
Then blk_mq_alloc_request_hctx() is guaranteed to succeed.
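
Roughly, in pseudo-code (everything below is hypothetical -- there is
no poll/irq mode switch for a live queue today, and nvme-fc would first
need poll queue support at all):

	/* 1. bring the queue up with polling enabled; no irq, hence no
	 *    online CPU in the hctx, is required */
	start_queue_in_poll_mode(q);

	/* 2. issue the connect io queue command and poll it to
	 *    completion instead of waiting for an interrupt */
	submit_connect_io_queue_cmd(q, qid);
	while (!connect_done(q, qid))
		poll_queue(q, qid);

	/* 3. connect succeeded: switch the queue to interrupt mode for
	 *    normal IO */
	switch_queue_to_irq_mode(q);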

> 
> My point is that your series is fixing existing bugs and doesn't
> introduce a new one. qla2xxx is already depending on managed IRQs. I
> would like to see your series accepted with my hack as stop gap solution
> until we have a proper fix.

I am fine with working this way first if no one objects.


Thanks, 
Ming



Thread overview: 13+ messages
2021-08-18 14:44 [PATCH V7 0/3] blk-mq: fix blk_mq_alloc_request_hctx Ming Lei
2021-08-18 14:44 ` [PATCH V7 1/3] genirq: add device_has_managed_msi_irq Ming Lei
2021-10-11 18:23   ` Varad Gautam
2021-08-18 14:44 ` [PATCH V7 2/3] blk-mq: mark if one queue map uses managed irq Ming Lei
2021-08-23 17:17   ` Sagi Grimberg
2021-08-18 14:44 ` [PATCH V7 3/3] blk-mq: don't deactivate hctx if managed irq isn't used Ming Lei
2021-08-23 17:18   ` Sagi Grimberg
2021-09-15 16:14   ` Daniel Wagner
2021-09-16  2:17     ` Ming Lei
2021-09-16  7:42       ` Daniel Wagner
2021-09-16  8:13         ` Ming Lei [this message]
2021-10-04 12:25           ` [RFC] nvme-fc: Allow managed IRQs Daniel Wagner
2021-08-19 22:38 ` [PATCH V7 0/3] blk-mq: fix blk_mq_alloc_request_hctx Ming Lei
