From: Hannes Reinecke <hare@suse.de>
To: Keith Busch <kbusch@kernel.org>
Cc: linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>,
Keith Busch <keith.busch@wdc.com>,
Sagi Grimberg <sagi@grimberg.me>
Subject: Re: [PATCH 2/2] nvme: delete disk when last path is gone
Date: Thu, 25 Feb 2021 18:37:41 +0100 [thread overview]
Message-ID: <176fe48f-090f-0365-ef8f-d650b803f053@suse.de> (raw)
In-Reply-To: <20210225165944.GF31593@redsun51.ssa.fujisawa.hgst.com>
On 2/25/21 5:59 PM, Keith Busch wrote:
> On Thu, Feb 25, 2021 at 12:05:34PM +0100, Hannes Reinecke wrote:
>> The multipath code currently deletes the disk only after all references
>> to it are dropped rather than when the last path to that disk is lost.
>> This differs from the behaviour in the non-multipathed case where the
>> disk is deleted once the controller is removed.
>> This has been reported to cause problems with some use cases like MD RAID.
>>
>> This patch implements an alternative behaviour of deleting the disk when
>> the last path is gone, i.e., the same behaviour as for non-multipathed nvme
>> devices. The alternative behaviour can be enabled with the new sysfs
>> attribute 'no_path_detach'.
>
> This looks ok to me. I have heard from a few people that they expected
> it to work this way with the option enabled, but I suppose we do need to
> retain the old behavior as default.
>
> Reviewed-by: Keith Busch <kbusch@kernel.org>
>
Oh, I would _love_ to kill the old behaviour.
Especially as we now have fast_io_fail_tmo and ctrl_loss_tmo, which
give us enough control over how the controller and the remaining paths
should behave (and which weren't present when fabrics was implemented).
We can make this behaviour the default, and kill the old approach next
year if there are no complaints :-)
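For reference, a minimal sketch of how the pieces discussed above would be used from userspace. The 'no_path_detach' attribute name comes from this patch (it may not exist in your kernel), and the subsystem path 'nvme-subsys0' is only an example; the ctrl-loss-tmo and fast-io-fail-tmo connect options are the nvme-cli counterparts of the timeouts mentioned above:

```shell
# Opt in to deleting the disk when the last path is gone
# (attribute proposed by this patch; path is an example and
# will differ per system).
echo 1 > /sys/class/nvme-subsystem/nvme-subsys0/no_path_detach

# Set the path-failure timeouts at connect time, e.g. for NVMe/TCP
# (address and NQN below are placeholders):
nvme connect -t tcp -a 192.168.1.10 -n nqn.example:subsys \
    --ctrl-loss-tmo=600 --fast-io-fail-tmo=5
```

With fast_io_fail_tmo set, I/O on a lost path fails over quickly instead of waiting for ctrl_loss_tmo to expire, which is what makes the stricter delete-on-last-path behaviour practical for MD RAID on top of native multipath.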
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Thread overview: 13+ messages
2021-02-25 11:05 [PATCHv2 0/2] nvme: fixup MD RAID usage Hannes Reinecke
2021-02-25 11:05 ` [PATCH 1/2] nvme: add 'fast_io_fail_tmo' controller sysfs attribute Hannes Reinecke
2021-03-05 21:10 ` Sagi Grimberg
2021-02-25 11:05 ` [PATCH 2/2] nvme: delete disk when last path is gone Hannes Reinecke
2021-02-25 16:59 ` Keith Busch
2021-02-25 17:37 ` Hannes Reinecke [this message]
2021-03-05 21:12 ` Sagi Grimberg
2021-03-03 1:01 ` [PATCHv2 0/2] nvme: fixup MD RAID usage Minwoo Im
-- strict thread matches above, loose matches on Subject: below --
2021-02-23 11:59 [PATCH 0/2] nvme: fix regression with MD RAID Hannes Reinecke
2021-02-23 11:59 ` [PATCH 2/2] nvme: delete disk when last path is gone Hannes Reinecke
2021-02-23 12:56 ` Minwoo Im
2021-02-23 14:07 ` Hannes Reinecke
2021-02-24 22:40 ` Sagi Grimberg
2021-02-25 8:37 ` Hannes Reinecke