From: Keith Busch <kbusch@kernel.org>
To: Hannes Reinecke <hare@suse.de>
Cc: Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	Keith Busch <keith.busch@wdc.com>,
	linux-nvme@lists.infradead.org,
	Daniel Wagner <daniel.wagner@suse.de>
Subject: Re: [PATCHv2] nvme-mpath: delete disk after last connection
Date: Wed, 21 Apr 2021 02:16:12 +0900
Message-ID: <20210420171612.GA18049@redsun51.ssa.fujisawa.hgst.com>
In-Reply-To: <73f1a5df-7d40-41ce-e6c5-42fd9fd3a2f1@suse.de>

On Tue, Apr 20, 2021 at 07:02:32PM +0200, Hannes Reinecke wrote:
> On 4/20/21 4:39 PM, Christoph Hellwig wrote:
> > On Tue, Apr 20, 2021 at 11:14:36PM +0900, Keith Busch wrote:
> > > On Tue, Apr 20, 2021 at 03:19:10PM +0200, Hannes Reinecke wrote:
> > > > On 4/20/21 10:05 AM, Christoph Hellwig wrote:
> > > > > On Fri, Apr 16, 2021 at 08:24:11AM +0200, Hannes Reinecke wrote:
> > > > > > With the proposed patch, the following messages appear:
> > > > > > 
> > > > > >    [  227.516807] md/raid1:md0: Disk failure on nvme3n1, disabling device.
> > > > > >    [  227.516807] md/raid1:md0: Operation continuing on 1 devices.
> > > > > 
> > > > > So how is this going to work for e.g. a case where the device
> > > > > disappears due to resets or fabrics connection problems?  This now
> > > > > directly tears down the device.
> > > > > 
> > > > Yes, that is correct; the nshead will be removed once the last path is
> > > > _removed_.
> > > 
> > > The end result is also how non-multipath nvme behaves, so I think that's
> > > what users have come to expect.
> > 
> > I'm not sure that is what users expect.  At least the SCSI multipath
> > setups I've worked with do not expect it and ensure the queue_if_no_path
> > option is set.
> > 
> Yes, sure. And as I said, I'm happy to implement this option for NVMe, too.
> But that is _not_ what this patch is about.
> 
> NVMe since day one has _removed_ the namespace if the controller goes away
> (ie if you do a PCI hotplug). So customers rightly expect this behaviour to
> continue.
> 
> And this is what the patch does: _aligning_ the behaviour of multipathed
> controllers with that of non-multipathed controllers when the last path is
> gone.
> 
> Non-multipathed (ie CMIC==0) controllers will remove the namespace once the
> last _reference_ to that namespace drops (ie the PCI hotplug case).
> Multipathed (ie CMIC!=0) controllers will remove the namespace once the last
> _opener_ goes away.
> The refcount is long gone by that time.
> 
> > > > But the key point here is that once the system finds itself in that
> > > > situation it's impossible to recover, as the refcounts are messed up.
> > > > Even a manual connect call with the same parameters will _not_ restore
> > > > operation, but rather result in a new namespace.
> > > 
> > > I haven't looked at this yet, but is it really not possible to restore
> > > the original namespace upon the reestablished connection?
> > 
> > It is possible, and in fact is what we do.
> > 
> It is _not_ once the namespace is mounted.
> Or MD has claimed the device.
> And the problem is that the refcount already _is_ zero, so we are already in
> teardown. We're just waiting for the reference to the gendisk to drop.
> Which it never will, as we would have to umount (or detach) the device for
> that, but I/O is still pending which cannot be flushed, so that will fail.
> And if we try to connect the same namespace again, nvme_find_ns_head() will
> not return the existing ns_head, as the refcount is zero, causing a new
> ns_head to be created.
> 
> If you manage to get this working with the current code, please show me,
> using the testcase in the description, what we should have done differently.

I see what you mean. I think we can untangle this, but it's not as
straightforward as I hoped.
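
Roughly the lookup in question, as a sketch (paraphrased, not verbatim
from the tree; field and struct names may differ slightly):

static struct nvme_ns_head *nvme_find_ns_head(struct nvme_subsystem *subsys,
		unsigned nsid)
{
	struct nvme_ns_head *h;

	lockdep_assert_held(&subsys->lock);

	list_for_each_entry(h, &subsys->nsheads, entry) {
		/*
		 * kref_get_unless_zero() fails once the kref has dropped
		 * to zero, so an ns_head that is already in teardown but
		 * still held open by a mount or by MD is skipped here,
		 * and the caller allocates a brand new ns_head for the
		 * same NSID on reconnect.
		 */
		if (h->ns_id == nsid && kref_get_unless_zero(&h->ref))
			return h;
	}

	return NULL;
}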

As far as *this* patch goes, I think you and I are aligned on the
behavior, and I still think it's good.
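
As a minimal sketch of that behaviour (a hypothetical helper, not the
actual patch): once the last path drops off an ns_head, the shared
gendisk is deleted right away instead of waiting for the last opener,
matching what a CMIC==0 device already does on PCI hotplug.

/* Hypothetical sketch of the agreed behaviour, not the actual patch. */
static void example_ns_remove_last_path(struct nvme_ns *ns)
{
	struct nvme_ns_head *head = ns->head;
	bool last_path;

	mutex_lock(&head->subsys->lock);
	list_del_rcu(&ns->siblings);
	last_path = list_empty(&head->list);
	mutex_unlock(&head->subsys->lock);

	if (last_path && head->disk) {
		/* Last path gone: tear the multipath disk down now. */
		del_gendisk(head->disk);
	}
}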
