From: Howell, Seth <seth.howell at intel.com>
To: spdk@lists.01.org
Subject: [SPDK] Re: NVMe hotplug for RDMA and TCP transports
Date: Tue, 26 May 2020 20:35:44 +0000	[thread overview]
Message-ID: <MN2PR11MB4256D342B876BD22FA18EE9AFEB00@MN2PR11MB4256.namprd11.prod.outlook.com> (raw)
In-Reply-To: CANvN+enRsZuBP0mt5DFgL9-ibKYTwYpufVbS62bH1czW3nW6HA@mail.gmail.com

Hi Andrey,

Typically when we refer to hotplug (removal) in fabrics transports, we mean the target side of the connection suddenly disconnecting the admin and I/O qpairs. This definition of hotplug is already supported in the NVMe initiator. If your definition of hotplug is something different, please correct me so that I can better answer your question.

In RDMA, for example, when we receive a disconnect event on the admin qpair for a given controller, we mark that controller as failed and fail all I/O corresponding to I/O qpairs on that controller. Subsequent calls to either submit I/O or process completions on any qpair associated with that controller then return -ENXIO, indicating to the initiator application that the drive has been failed by the target side.
There are a couple of reasons that could happen:
1. The actual drive itself has been hotplugged from the target application (i.e. NVMe PCIe hotplug on the target side)
2. There was some network event that caused the target application to disconnect (NIC failure, RDMA error, etc.)

Because there are multiple reasons we could receive a "hotplug" event from the target application, we leave it up to the initiator application to decide what to do with it: destroy the controller from the initiator side, try reconnecting to the controller at the same TRID, or attempt to connect to the controller at a different TRID (something like target-side port failover).
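As a rough sketch of what that looks like from the initiator side, the completion-polling path can watch for -ENXIO and apply whichever policy the application chooses. This is just an illustration, not code from the driver: `struct app_ctx` and its fields (`qpair`, `ctrlr`, `trid`) are hypothetical application state, and the "retry same TRID" policy is only one of the options described above.

```c
#include <errno.h>
#include <stddef.h>
#include "spdk/nvme.h"

/* Hypothetical per-controller application state. */
struct app_ctx {
	struct spdk_nvme_ctrlr		*ctrlr;
	struct spdk_nvme_qpair		*qpair;
	struct spdk_nvme_transport_id	trid;
};

/* Poll an I/O qpair; -ENXIO means the controller has been failed
 * (e.g. the target disconnected us), so tear it down and, as one
 * possible policy, reconnect to the same TRID. */
static void
poll_and_handle_removal(struct app_ctx *ctx)
{
	int rc = spdk_nvme_qpair_process_completions(ctx->qpair, 0 /* no limit */);

	if (rc == -ENXIO) {
		/* Further submissions on this controller would also fail. */
		spdk_nvme_ctrlr_free_io_qpair(ctx->qpair);
		spdk_nvme_detach(ctx->ctrlr);

		/* Policy decision: retry the same TRID; an application could
		 * instead fail over to a different TRID, or give up here. */
		ctx->ctrlr = spdk_nvme_connect(&ctx->trid, NULL, 0);
		if (ctx->ctrlr != NULL) {
			ctx->qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctx->ctrlr, NULL, 0);
		}
	}
}
```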

In terms of hotplug insertion, I assume that would mean you want the initiator to automatically connect to a target subsystem that can be presented at any point in time during the running of the application. There isn't a specific driver level implementation of this feature for fabrics controllers, I think mostly because it would be very easy to implement and customize this functionality at the application layer. For example, one could periodically call discover on the targets they want to connect to and when new controllers/subsystems appear, connect to them at that time.
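To make the application-layer approach concrete, here is a hedged sketch of periodic discovery: probing the discovery service TRID and attaching only to subsystems we have not seen before. The helpers `already_attached()` and `track_new_ctrlr()` are hypothetical application bookkeeping, not SPDK functions; the callback signatures follow spdk_nvme_probe().

```c
#include <stdbool.h>
#include "spdk/nvme.h"

/* Called once per subsystem reported by the discovery service; return
 * true to connect. A real application would check whether it is
 * already attached (already_attached() is a hypothetical helper). */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return !already_attached(cb_ctx, trid);
}

/* Called for each newly attached controller; start using it
 * (track_new_ctrlr() is a hypothetical helper). */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	track_new_ctrlr(cb_ctx, trid, ctrlr);
}

/* Run this periodically (e.g. from a poller) against the discovery TRID
 * to pick up subsystems presented after the application started. */
static void
rescan(const struct spdk_nvme_transport_id *discovery_trid, void *cb_ctx)
{
	spdk_nvme_probe(discovery_trid, cb_ctx, probe_cb, attach_cb, NULL);
}
```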

I hope that this answers your question. Please let me know if I am talking about a different definition of hotplug than the one you are using.

Thanks,

Seth



-----Original Message-----
From: Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com> 
Sent: Friday, May 22, 2020 1:47 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] NVMe hotplug for RDMA and TCP transports

Hi team,

is NVMe hotplug functionality as implemented limited to PCIe transport or does it also work for other transports? If it's currently PCIe only, are there any plans to extend the support to RDMA/TCP?

Thanks,
Andrey
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org

