From: Alex Lyakas <alex@zadara.com>
To: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
Cc: Sagi Grimberg <sagi@grimberg.me>,
	"snitzer@redhat.com" <snitzer@redhat.com>,
	"james.smart@broadcom.com" <james.smart@broadcom.com>,
	linux-nvme <linux-nvme@lists.infradead.org>,
	"Victor.Gladkov@kioxia.com" <Victor.Gladkov@kioxia.com>,
	"hare@suse.de" <hare@suse.de>, "hch@lst.de" <hch@lst.de>
Subject: Re: [PATCH V4] nvme-fabrics: reject I/O to offline device
Date: Thu, 2 Apr 2020 20:00:16 +0300
Message-ID: <CAOcd+r1d7zeP3p+jH_PqaSRTs0p2u3Lt+uom1j8PWTca8csCiA@mail.gmail.com>
In-Reply-To: <BYAPR04MB4965B8FC2D68B37B5D384F4E86C60@BYAPR04MB4965.namprd04.prod.outlook.com>

Hi Chaitanya, Victor,

I believe the following modification to nvme_failfast_work addresses the issue:

static void nvme_failfast_work(struct work_struct *work)
{
    struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
        struct nvme_ctrl, failfast_work);
    bool run_queues = false;

    spin_lock_irq(&ctrl->lock);
    if (ctrl->state != NVME_CTRL_CONNECTING)
        goto out;

    set_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
    dev_info(ctrl->device, "failfast expired set for controller %s\n",
        ctrl->opts->subsysnqn);
    nvme_kick_requeue_lists(ctrl);
    run_queues = true;
out:
    spin_unlock_irq(&ctrl->lock);

    /* New: re-run the hw queues so parked requests are dispatched again. */
    if (run_queues) {
        struct nvme_ns *ns;

        down_read(&ctrl->namespaces_rwsem);
        list_for_each_entry(ns, &ctrl->namespaces, list)
            blk_mq_run_hw_queues(ns->queue, true /* async */);
        up_read(&ctrl->namespaces_rwsem);
    }
}
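
(Note that blk_mq_run_hw_queues() must be called outside ctrl->lock:
down_read() on namespaces_rwsem may sleep, which is not allowed under a
spinlock, hence the run_queues flag. Passing true runs the queues
asynchronously, so the work item does not block on each hctx.)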

The key change is calling blk_mq_run_hw_queues() for every namespace
after setting the failfast flag. With this change, in-flight IO also
fails after fast_io_fail_tmo expires, as expected.
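
My understanding of why the extra kick is needed (my reading of the
code, not something stated in the patch): setting the flag alone does
not cause the parked requests to be dispatched again, so something has
to re-run the hardware queues for the requests to re-enter
->queue_rq(), where they can now be failed because
NVME_CTRL_FAILFAST_EXPIRED is set. A minimal sketch of the kind of gate
I have in mind, modeled on the existing nvmf_fail_nonready_command()
helper (a sketch only; the exact check in the V4 patch may differ):

blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
        struct request *rq)
{
    /*
     * Keep asking blk-mq to requeue while a reconnect is still
     * possible and the failfast timer has not expired.
     */
    if (ctrl->state != NVME_CTRL_DELETING &&
        ctrl->state != NVME_CTRL_DEAD &&
        !test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) &&
        !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
        return BLK_STS_RESOURCE;

    /* Otherwise complete the request with a host path error. */
    nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
    blk_mq_start_request(rq);
    nvme_complete_rq(rq);
    return BLK_STS_OK;
}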

Can you please comment?

Thanks,
Alex.

On Thu, Apr 2, 2020 at 7:48 AM Chaitanya Kulkarni
<Chaitanya.Kulkarni@wdc.com> wrote:
>
> On 3/31/20 11:21 AM, Alex Lyakas wrote:
> > Thank you for addressing this issue. I asked about this scenario in
> > June 2019 in http://lists.infradead.org/pipermail/linux-nvme/2019-June/024766.html.
> >
> > I tested the patch on top of Mellanox OFED 4.7 with kernel 4.14. From
> > my perspective, the direction is very good. But I think the problem is
> > only partially addressed.
> >
> > When a controller enters the CONNECTING state and fast_io_fail_tmo
> > expires, all new IOs to this controller are failed immediately. This
> > is great!
> >
> > However, in-flight IOs, i.e., those that were issued before the
> > controller got disconnected, remain stuck until the controller
> > succeeds in reconnecting, possibly forever. I believe those IOs need
> > to be errored out as well after fast_io_fail_tmo expires.
> >
> > I did some debugging to try to accomplish that. The crux seems to be
> > that in-flight IOs are failed and then retried, due to the non-zero
> > nvme_max_retries parameter, at a point where the request queue has
> > already been quiesced by blk_mq_quiesce_queue(); that is why they get
> > stuck. I indeed see that nvme_retry_req() is called for some IOs. But
> > I also see that after that, the request queue is un-quiesced via:
> >
> > nvme_rdma_error_recovery_work() =>
> >     nvme_rdma_teardown_io_queues() => nvme_stop_queues() // quiesces the queues
> >     nvme_start_queues() // un-quiesces the queues
> >
> > Can anybody perhaps give a hint on the approach to error out the
> > in-flight IOs? I can modify the patch and test.
> >
> > Thanks,
> > Alex.
>
>
> Thanks for the feedback. Let me see if I can reproduce the scenario
> and put together a test for it.
>
> I'll send out the updated patch soon.
>
