linux-nvme.lists.infradead.org archive mirror
From: James Smart <james.smart@broadcom.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	Christoph Hellwig <hch@infradead.org>,
	Victor Gladkov <Victor.Gladkov@kioxia.com>
Cc: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	Mike Snitzer <snitzer@redhat.com>, Hannes Reinecke <hare@suse.de>
Subject: Re: [PATCH v3] nvme-fabrics: reject I/O to offline device
Date: Thu, 20 Feb 2020 09:41:13 -0800	[thread overview]
Message-ID: <f1d99912-d177-85ce-7ebd-4863cdcb2a36@broadcom.com> (raw)
In-Reply-To: <7ac74c23-db96-56e0-ad6e-24bb4df1934b@grimberg.me>

On 2/20/2020 12:34 AM, Sagi Grimberg wrote:
>
>>> +static void nvme_failfast_work(struct work_struct *work)
>>> +{
>>> +    struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
>>> +            struct nvme_ctrl, failfast_work);
>>> +
>>> +    spin_lock_irq(&ctrl->lock);
>>> +    if (ctrl->state == NVME_CTRL_CONNECTING) {
>>> +        set_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
>>> +        dev_info(ctrl->device, "failfast expired set for controller %s\n", ctrl->opts->subsysnqn);
>>
>> Please break up the line.
>>
>> But looking at the use of NVME_CTRL_FAILFAST_EXPIRED, it almost seems
>> like this is another controller state?
>
> It actually is a controller state. this is just adding another state
> without really fitting it into the state machine. I'd personally
> would want that to happen, but I know James had some rejects on
> this.

I don't believe it's a controller state - rather an attribute within the 
Connecting state (which may be in the process of retrying): instead of 
busying any I/O request that may be received, we start to fail them. I 
don't see this time window as something the controller has to actually 
transition through - meaning we aren't reconnecting while in this state.


>
> Also, I still say that its default changes the existing behavior which
> is something we want to avoid.

I thought the last patch was the same as existing behavior (e.g. default 
tmo is 0, and if 0, the timer, which sets the now-fail flag, never gets 
scheduled).

In the last review, I was exploring the option of changing the default, 
as the default is a rather long wait (I did vacillate). But I'm fine 
with keeping things the same.

-- james


_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

Thread overview: 6+ messages
2020-02-04 15:49 [PATCH v3] nvme-fabrics: reject I/O to offline device Victor Gladkov
2020-02-19 15:28 ` Christoph Hellwig
2020-02-20  8:34   ` Sagi Grimberg
2020-02-20 17:41     ` James Smart [this message]
2020-02-26  8:52       ` Victor Gladkov
2020-03-02  3:30         ` Chaitanya Kulkarni
