linux-nvme.lists.infradead.org archive mirror
From: James Smart <james.smart@broadcom.com>
To: Victor Gladkov <Victor.Gladkov@kioxia.com>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [PATCH] nvme-fabrics: reject I/O to offline device
Date: Thu, 5 Dec 2019 16:38:27 -0800	[thread overview]
Message-ID: <4963e813-0d99-4890-804a-cd4c9c660607@broadcom.com> (raw)
In-Reply-To: <ac9b5d7192fb49ac9bdf19dd35be0ab2@kioxia.com>



On 12/4/2019 12:28 AM, Victor Gladkov wrote:
> On 12/03/2019 06:19 PM, James Smart wrote:
>> On 12/3/2019 2:04 AM, Victor Gladkov wrote:
>>> On 12/03/2019 00:47 AM, James Smart wrote:
>>>> The controller-loss-timeout should not affect IO timeout policy, these are
>>>> two different policies.
>> Ok - which says what does make sense to add is the portion:
>>
>>     !(ctrl->state == NVME_CTRL_CONNECTING && ((ktime_get_ns() - rq->start_time_ns) > jiffies_to_nsecs(rq->timeout)))
>>
>>
>> But I don't think we need the failfast flag.
>>
>> -- james
> OK. I think it's good enough.
>
> This is updated patch:
>
> ---
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 74b8818..b58abc1 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -549,6 +549,8 @@ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
>   {
>          if (ctrl->state != NVME_CTRL_DELETING &&
>              ctrl->state != NVME_CTRL_DEAD &&
> +           !(ctrl->state == NVME_CTRL_CONNECTING &&
> +            ((ktime_get_ns() - rq->start_time_ns) > jiffies_to_nsecs(rq->timeout))) &&
>              !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
>                  return BLK_STS_RESOURCE;
>
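Restated outside the kernel (a plain-C sketch, not the real code: the enum values, the bool flags, and the ns-based timeout field are illustrative stand-ins for the kernel's ctrl->state, blk_noretry_request()/REQ_NVME_MPATH checks, and rq->timeout), the updated condition requeues the command (BLK_STS_RESOURCE) unless the controller is deleting/dead, the request is no-retry or multipath, or the controller is CONNECTING and the request has already outlived its timeout:

```c
#include <stdbool.h>
#include <stdint.h>

enum ctrl_state { CONNECTING, LIVE, DELETING, DEAD };  /* illustrative */

struct req {
    uint64_t start_time_ns;  /* stand-in for rq->start_time_ns */
    uint64_t timeout_ns;     /* rq->timeout, already converted to ns */
    bool     noretry;        /* stand-in for blk_noretry_request(rq) */
    bool     mpath;          /* stand-in for rq->cmd_flags & REQ_NVME_MPATH */
};

/* true => requeue (BLK_STS_RESOURCE); false => fail the command now */
static bool should_requeue(enum ctrl_state state, const struct req *rq,
                           uint64_t now_ns)
{
    return state != DELETING &&
           state != DEAD &&
           !(state == CONNECTING &&
             (now_ns - rq->start_time_ns) > rq->timeout_ns) &&
           !rq->noretry && !rq->mpath;
}
```

With a 30 s timeout, a LIVE controller keeps requeueing, while a CONNECTING controller stops requeueing once the request's elapsed time crosses the timeout.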

Did you test this to ensure it's doing what you expect? I'm not sure 
that all the timers are set right at this point. Most I/Os time out 
against a deadline stamped at blk_mq_start_request(). But that routine 
is actually called by the transports after the 
nvmf_check_ready/fail_nonready calls. I.e. the I/O is not yet in flight, 
thus queued, and the blk-mq internal queuing doesn't count against the 
I/O timeout. I can't see anything that guarantees start_time_ns is set.

-- james


_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

Thread overview: 14+ messages
2019-12-01  7:59 [PATCH] nvme-fabrics: reject I/O to offline device Victor Gladkov
2019-12-02 22:26 ` Chaitanya Kulkarni
2019-12-02 22:47 ` James Smart
2019-12-03 10:04   ` Victor Gladkov
2019-12-03 16:19     ` James Smart
2019-12-04  8:28       ` Victor Gladkov
2019-12-06  0:38         ` James Smart [this message]
2019-12-06 22:18           ` Sagi Grimberg
2019-12-08 12:31             ` Hannes Reinecke
2019-12-09 15:30               ` Victor Gladkov
2019-12-17 18:03                 ` James Smart
2019-12-17 21:46                 ` Sagi Grimberg
2019-12-18 22:20                   ` James Smart
2019-12-15 12:33               ` Victor Gladkov
