From: Sagi Grimberg <sagi@grimberg.me>
To: Hannes Reinecke <hare@suse.de>, Keith Busch <kbusch@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Keith Busch <keith.busch@wdc.com>,
	linux-nvme@lists.infradead.org
Subject: Re: [PATCH 2/2] nvme: delete disk when last path is gone
Date: Fri, 5 Mar 2021 13:12:30 -0800	[thread overview]
Message-ID: <96afcec1-92c5-6837-5853-9b180e881375@grimberg.me> (raw)
In-Reply-To: <176fe48f-090f-0365-ef8f-d650b803f053@suse.de>


>>> The multipath code currently deletes the disk only after all references
>>> to it are dropped rather than when the last path to that disk is lost.
>>> This differs from the behaviour in the non-multipathed case where the
>>> disk is deleted once the controller is removed.
>>> This has been reported to cause problems with some use cases like MD 
>>> RAID.
>>>
>>> This patch implements an alternative behaviour of deleting the disk when
>>> the last path is gone, i.e. the same behaviour as non-multipathed nvme
>>> devices. The alternative behaviour can be enabled with the new sysfs
>>> attribute 'no_path_detach'.
>>
>> This looks ok to me. I have heard from a few people that they expected
>> it to work this way with the option enabled, but I suppose we do need to
>> retain the old behavior as default.
>>
>> Reviewed-by: Keith Busch <kbusch@kernel.org>
>>
> Oh, I would _love_ to kill the old behaviour.
> Especially as we now have fast_io_fail_tmo and ctrl_loss_tmo, which 
> gives us enough control on how the controller and the remaining paths 
> should behave (and which weren't present when fabrics got implemented).
> 
> We can make this behaviour the default, and kill the old approach next 
> year if there are no complaints :-)

If the current behavior is broken, and the new behavior doesn't break
anything else, then I don't see why we shouldn't do that.
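For readers following along: the patch description above says the new
behaviour is toggled via a sysfs attribute named 'no_path_detach'. The
exact location of that attribute under /sys is not spelled out in this
thread, so the sketch below writes to a fake sysfs tree purely for
illustration; the subsystem directory name is an assumption, not taken
from the patch.

```shell
# Sketch of toggling the proposed 'no_path_detach' attribute. The real
# attribute would live somewhere under /sys; since this thread does not
# give the exact path, we use a fake tree so the sketch is runnable.
SYSFS="${SYSFS:-/tmp/fake-nvme-sysfs}"
mkdir -p "$SYSFS/nvme-subsys0"

# Enable "delete the disk when the last path is gone" (per the thread,
# the old reference-counted behaviour remains the default):
echo 1 > "$SYSFS/nvme-subsys0/no_path_detach"

# Read the toggle back to confirm it took effect:
cat "$SYSFS/nvme-subsys0/no_path_detach"
```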


Thread overview: 13+ messages
2021-02-25 11:05 [PATCHv2 0/2] nvme: fixup MD RAID usage Hannes Reinecke
2021-02-25 11:05 ` [PATCH 1/2] nvme: add 'fast_io_fail_tmo' controller sysfs attribute Hannes Reinecke
2021-03-05 21:10   ` Sagi Grimberg
2021-02-25 11:05 ` [PATCH 2/2] nvme: delete disk when last path is gone Hannes Reinecke
2021-02-25 16:59   ` Keith Busch
2021-02-25 17:37     ` Hannes Reinecke
2021-03-05 21:12       ` Sagi Grimberg [this message]
2021-03-03  1:01 ` [PATCHv2 0/2] nvme: fixup MD RAID usage Minwoo Im
  -- strict thread matches above, loose matches on Subject: below --
2021-02-23 11:59 [PATCH 0/2] nvme: fix regression with MD RAID Hannes Reinecke
2021-02-23 11:59 ` [PATCH 2/2] nvme: delete disk when last path is gone Hannes Reinecke
2021-02-23 12:56   ` Minwoo Im
2021-02-23 14:07     ` Hannes Reinecke
2021-02-24 22:40   ` Sagi Grimberg
2021-02-25  8:37     ` Hannes Reinecke
