* [PATCH] nvme_fc auto-connect scripts
@ 2017-10-09 20:10 James Smart
  2017-10-09 20:13 ` James Smart
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: James Smart @ 2017-10-09 20:10 UTC (permalink / raw)


Sagi recently posted a "one-shot" call to nvme-connect via a systemd
script. FC has no problems operating with it assuming all the required
connect fields are present (host_traddr required).

However, FC is a bit more dynamic, and we've built up scripts that can
be triggered whenever a new FC remote port with a discovery subsystem is
detected. Whenever such a port is detected (by the lldd registering the
remoteport with the nvme_fc transport), the transport generates a udev
event. A udev rule parses the event information into the FC remote port
that supports a discovery subsystem (traddr) and the FC host port that is
connected to that remote port (host_traddr). The rule then invokes an
instance-based systemd service to perform an "nvme connect-all" with the
traddr and host-traddr info.
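
For illustration (the WWNs below are made up), the command that ends up
being run for a single host port / remote port pair looks like:

  nvme connect-all --transport=fc \
      --host-traddr=nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 \
      --traddr=nn-0x201700a09890f5bf:pn-0x201900a09890f5bf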

To handle the case where remote ports are discovered before the udev
service is running and the events are not replayed, there is an
additional systemd service that runs once at boot and invokes a script.
The script locates the FC host ports on the system, looks for any
connected FC remote ports with discovery subsystems, and invokes a
connect-all for them.  The FC port information is currently read from
lpfc-specific attributes, but once the shared FC-NVME transport is in
place it will be common across drivers.
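
As a rough sketch (host6 and the WWN values are purely illustrative),
for each FC host port the script effectively does:

  WWNN=`cat /sys/class/fc_host/host6/node_name`    # e.g. 0x20000090fae0b5f5
  WWPN=`cat /sys/class/fc_host/host6/port_name`    # e.g. 0x10000090fae0b5f5
  # each "NVME RPORT ... DISCSRVC" entry found in the lpfc nvme_info
  # attribute is then turned into:
  nvme connect-all --transport=fc --host-traddr=nn-${WWNN}:pn-${WWPN} --traddr=nn-<rport wwnn>:pn-<rport wwpn>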

I don't claim to be a systemd expert, so comments are certainly welcome.

There's a spec file for building an rpm but, for reference, the manual
installation is as follows:

(as root)
cp 70-nvmefc-autoconnect.conf        /usr/lib/dracut/dracut.conf.d/
cp 70-nvmefc-autoconnect.rules       /usr/lib/udev/rules.d/
cp nvmefc-connect@.service           /usr/lib/systemd/system/
cp nvmefc-boot-connections.service   /usr/lib/systemd/system/
cp nvmefc_boot_connections.sh        /usr/sbin/lpfc/

chmod 644 /usr/lib/dracut/dracut.conf.d/70-nvmefc-autoconnect.conf
chmod 644 /usr/lib/udev/rules.d/70-nvmefc-autoconnect.rules
chmod 644 /usr/lib/systemd/system/nvmefc-connect@.service
chmod 644 /usr/lib/systemd/system/nvmefc-boot-connections.service
chmod 544 /usr/sbin/lpfc/nvmefc_boot_connections.sh

systemctl -q enable nvmefc-boot-connections
systemctl -q daemon-reload
udevadm control --reload-rules
udevadm trigger --action=change --subsystem-match=fc
dracut -f

The nvmefc_boot_connections.sh script can be put somewhere other than
/usr/sbin/lpfc. If moved, the nvmefc-boot-connections.service file must
be revised with the proper pathname.
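
For example, if the script is placed directly in /usr/sbin, the ExecStart
line in nvmefc-boot-connections.service would become:

  ExecStart=/bin/sh -c "/usr/sbin/nvmefc_boot_connections.sh"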

Signed-off-by: James Smart <james.smart@broadcom.com>
---
 nvme-fc-autoconnect/70-nvmefc-autoconnect.conf     |   1 +
 nvme-fc-autoconnect/70-nvmefc-autoconnect.rules    |   6 +
 .../nvmefc-boot-connections.service                |   9 ++
 nvme-fc-autoconnect/nvmefc-connect.spec            | 138 +++++++++++++++++++++
 nvme-fc-autoconnect/nvmefc-connect@.service           |  14 +++
 nvme-fc-autoconnect/nvmefc_boot_connections.sh     |  25 ++++
 6 files changed, 193 insertions(+)
 create mode 100644 nvme-fc-autoconnect/70-nvmefc-autoconnect.conf
 create mode 100644 nvme-fc-autoconnect/70-nvmefc-autoconnect.rules
 create mode 100644 nvme-fc-autoconnect/nvmefc-boot-connections.service
 create mode 100644 nvme-fc-autoconnect/nvmefc-connect.spec
 create mode 100644 nvme-fc-autoconnect/nvmefc-connect@.service
 create mode 100644 nvme-fc-autoconnect/nvmefc_boot_connections.sh

diff --git a/nvme-fc-autoconnect/70-nvmefc-autoconnect.conf b/nvme-fc-autoconnect/70-nvmefc-autoconnect.conf
new file mode 100644
index 000000000000..b92d94fda336
--- /dev/null
+++ b/nvme-fc-autoconnect/70-nvmefc-autoconnect.conf
@@ -0,0 +1 @@
+install_items+="/usr/lib/udev/rules.d/70-nvmefc-autoconnect.rules"
diff --git a/nvme-fc-autoconnect/70-nvmefc-autoconnect.rules b/nvme-fc-autoconnect/70-nvmefc-autoconnect.rules
new file mode 100644
index 000000000000..a84ea61fca5f
--- /dev/null
+++ b/nvme-fc-autoconnect/70-nvmefc-autoconnect.rules
@@ -0,0 +1,6 @@
+#
+# nvme_fc: udev event to automatically scan (via discovery controller)
+#   new FC nvme remote ports and auto-connect to the subsystems they report.
+#
+
+ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", ENV{NVMEFC_HOST_TRADDR}=="*",  ENV{NVMEFC_TRADDR}=="*", RUN+="/usr/bin/systemctl --no-block start nvmefc-connect at --host-traddr=$env{NVMEFC_HOST_TRADDR}\\x20--traddr=$env{NVMEFC_TRADDR}.service"
diff --git a/nvme-fc-autoconnect/nvmefc-boot-connections.service b/nvme-fc-autoconnect/nvmefc-boot-connections.service
new file mode 100644
index 000000000000..6b0a7585a43e
--- /dev/null
+++ b/nvme-fc-autoconnect/nvmefc-boot-connections.service
@@ -0,0 +1,9 @@
+[Unit]
+Description=Auto-connect to subsystems on FC-NVME devices during boot
+
+[Service]
+Type=oneshot
+ExecStart=/bin/sh -c "/usr/sbin/lpfc/nvmefc_boot_connections.sh"
+
+[Install]
+WantedBy=default.target
diff --git a/nvme-fc-autoconnect/nvmefc-connect.spec b/nvme-fc-autoconnect/nvmefc-connect.spec
new file mode 100644
index 000000000000..8497128096a9
--- /dev/null
+++ b/nvme-fc-autoconnect/nvmefc-connect.spec
@@ -0,0 +1,138 @@
+#/*----------------------------------------------------------------------------
+#
+# Broadcom Confidential. Copyright 2017 Broadcom. All Rights Reserved.
+# The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries.
+# Unpublished work. Copying, access, use, or distribution requires an
+# applicable license approved by Broadcom.
+#
+#----------------------------------------------------------------------------*/
+
+%define _sandbox        %(dirname %{_topdir})
+%define _version_str    %{_versionstr}
+
+
+Name: nvmefc-connect
+Vendor: Broadcom Ltd or its subsidiaries
+Version: %{_version_str}
+Release: 1
+Summary: NVMe FC Auto Connect
+License: Copyright 2017 Broadcom. All Rights Reserved. The term "Broadcom" refers to Broadcom Limited and/or its subsidiaries.
+URL: www.broadcom.com
+Group: Applications/System
+ExclusiveOS: linux
+Requires: dracut, nvme-cli, systemd, udev
+AutoReqProv: yes
+Source: nvmefc-connect-%{version}-linux.tgz
+BuildRoot: %{_sandbox}/tmp/nvmefc-connect
+
+#During an installation
+#%pre:
+#   Runs before the package is installed
+#%post:
+#   Runs after the package is installed
+
+#During an uninstallation
+#%preun
+#   Runs before the package is uninstalled
+#%postun
+#   Runs after the package is uninstalled
+
+#During an Upgrade
+#%pre of new package
+#   Install new files
+#%post of new package
+#
+#%preun of old package
+#   Delete any old files not overwritten by newer ones
+#%postun of old package
+
+%description
+NVMe FC Auto Connect
+
+
+%prep
+
+
+
+
+%setup
+
+
+
+
+%pre
+# During an upgrade, remove the symbolic links
+if [ "$1" = "2" ]; then
+    systemctl -q disable nvmefc-boot-connections
+    systemctl -q daemon-reload
+fi
+
+
+
+
+%install
+rm -rf %{buildroot}/usr/lib/dracut/dracut.conf.d/
+rm -rf %{buildroot}/usr/lib/systemd/system/
+rm -rf %{buildroot}/usr/lib/udev/rules.d/
+rm -rf %{buildroot}/usr/sbin/lpfc/
+
+mkdir -p %{buildroot}/usr/lib/dracut/dracut.conf.d/
+mkdir -p %{buildroot}/usr/lib/systemd/system/
+mkdir -p %{buildroot}/usr/lib/udev/rules.d/
+mkdir -p %{buildroot}/usr/sbin/lpfc/
+
+cp ./usr/lib/dracut/dracut.conf.d/70-nvmefc-autoconnect.conf  %{buildroot}/usr/lib/dracut/dracut.conf.d/
+cp ./usr/lib/udev/rules.d/70-nvmefc-autoconnect.rules         %{buildroot}/usr/lib/udev/rules.d/
+cp ./usr/lib/systemd/system/nvmefc-connect@.service              %{buildroot}/usr/lib/systemd/system/
+cp ./usr/lib/systemd/system/nvmefc-boot-connections.service   %{buildroot}/usr/lib/systemd/system/
+cp ./usr/sbin/lpfc/nvmefc_boot_connections.sh                 %{buildroot}/usr/sbin/lpfc/
+
+chmod 644 %{buildroot}/usr/lib/dracut/dracut.conf.d/70-nvmefc-autoconnect.conf
+chmod 644 %{buildroot}/usr/lib/udev/rules.d/70-nvmefc-autoconnect.rules
+chmod 644 %{buildroot}/usr/lib/systemd/system/nvmefc-connect@.service
+chmod 644 %{buildroot}/usr/lib/systemd/system/nvmefc-boot-connections.service
+chmod 544 %{buildroot}/usr/sbin/lpfc/nvmefc_boot_connections.sh
+
+
+
+
+%post
+systemctl -q enable nvmefc-boot-connections
+systemctl -q daemon-reload
+udevadm control --reload-rules
+udevadm trigger --action=change --subsystem-match=fc
+dracut -f
+
+
+
+
+%preun
+# Perform the following only if this is an uninstallation.
+# i.e. "rpm -ev"
+if [ "$1" = "0" ]; then
+    systemctl -q disable nvmefc-boot-connections
+    systemctl -q daemon-reload
+fi
+
+
+
+
+%postun
+systemctl -q daemon-reload
+udevadm control --reload-rules
+# Perform the following only if this is an uninstallation.
+# i.e. "rpm -ev"
+if [ "$1" = "0" ]; then
+    dracut -f
+fi
+
+
+
+
+%files
+/usr/lib/dracut/dracut.conf.d/70-nvmefc-autoconnect.conf
+/usr/lib/udev/rules.d/70-nvmefc-autoconnect.rules
+/usr/lib/systemd/system/nvmefc-connect@.service
+/usr/lib/systemd/system/nvmefc-boot-connections.service
+/usr/sbin/lpfc/nvmefc_boot_connections.sh
+
diff --git a/nvme-fc-autoconnect/nvmefc-connect@.service b/nvme-fc-autoconnect/nvmefc-connect@.service
new file mode 100644
index 000000000000..c2370d63dab1
--- /dev/null
+++ b/nvme-fc-autoconnect/nvmefc-connect@.service
@@ -0,0 +1,14 @@
+#
+# Unit file used by 70-nvmefc-autoconnect.rules.
+#
+
+[Unit]
+Description=Auto-connect to subsystems on FC-NVME devices
+After=syslog.target
+
+[Service]
+Type=oneshot
+ExecStart=/bin/sh -c "/usr/sbin/nvme connect-all --transport=fc `/usr/bin/echo -e '%i'`"
+
+[Install]
+WantedBy=default.target
diff --git a/nvme-fc-autoconnect/nvmefc_boot_connections.sh b/nvme-fc-autoconnect/nvmefc_boot_connections.sh
new file mode 100644
index 000000000000..b1c1e67cdd8b
--- /dev/null
+++ b/nvme-fc-autoconnect/nvmefc_boot_connections.sh
@@ -0,0 +1,25 @@
+#!/bin/sh
+
+connect_discovery_servers_on_host()
+{
+  INFOFILE=$1/nvme_info
+  HOSTNM=`basename $1`
+  if [ -f ${INFOFILE} ] ; then
+    WWNN=`cat /sys/class/fc_host/${HOSTNM}/node_name`
+    WWPN=`cat /sys/class/fc_host/${HOSTNM}/port_name`
+    HOST_TRADDR="nn-${WWNN}:pn-${WWPN}"
+    cat ${INFOFILE} | awk -v haddr=${HOST_TRADDR} '/^NVME RPORT.*DISCSRVC/{print "/usr/sbin/nvme connect-all --transport=fc --host-traddr=" haddr " --traddr=nn-0" $6 ":pn-0" $4 }' | bash
+  fi
+}
+
+# must be running with root privileges
+ROOT=`id -u`
+
+if [ $ROOT -ne 0 ] ; then
+  echo "$0: Must be root to execute"
+  exit 1
+fi
+
+for f in /sys/class/scsi_host/* ; do
+  connect_discovery_servers_on_host $f
+done
-- 
2.13.1


* [PATCH] nvme_fc auto-connect scripts
  2017-10-09 20:10 [PATCH] nvme_fc auto-connect scripts James Smart
@ 2017-10-09 20:13 ` James Smart
  2017-10-10 13:48 ` Johannes Thumshirn
  2017-10-27  7:48 ` Hannes Reinecke
  2 siblings, 0 replies; 10+ messages in thread
From: James Smart @ 2017-10-09 20:13 UTC (permalink / raw)


Sorry - this was meant to be posted as an RFC. Please treat it as such.

-- james


* [PATCH] nvme_fc auto-connect scripts
  2017-10-09 20:10 [PATCH] nvme_fc auto-connect scripts James Smart
  2017-10-09 20:13 ` James Smart
@ 2017-10-10 13:48 ` Johannes Thumshirn
  2017-10-11 11:39   ` Sagi Grimberg
  2017-10-27  7:48 ` Hannes Reinecke
  2 siblings, 1 reply; 10+ messages in thread
From: Johannes Thumshirn @ 2017-10-10 13:48 UTC (permalink / raw)


On Mon, Oct 09, 2017@01:10:32PM -0700, James Smart wrote:
> cp nvmefc_boot_connections.sh        /usr/sbin/lpfc/

[...]

> +ExecStart=/bin/sh -c "/usr/sbin/lpfc/nvmefc_boot_connections.sh"

/usr/sbin/lpfc won't get past our package reviewers; %_sbindir (for rpm, I'm
sure there is an equivalent for deb as well) should be used there.


Also wouldn't it make sense to have the scripts/services in nvme-cli?

Byte,
	Johannes
-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* [PATCH] nvme_fc auto-connect scripts
  2017-10-10 13:48 ` Johannes Thumshirn
@ 2017-10-11 11:39   ` Sagi Grimberg
  0 siblings, 0 replies; 10+ messages in thread
From: Sagi Grimberg @ 2017-10-11 11:39 UTC (permalink / raw)



>> cp nvmefc_boot_connections.sh        /usr/sbin/lpfc/
> 
> [...]
> 
>> +ExecStart=/bin/sh -c "/usr/sbin/lpfc/nvmefc_boot_connections.sh"
> 
> /usr/sbin/lpfc wont get past our package reviewers, %_sbindir (for rpm, I'm
> sure there is an equivalent for deb as well) should be used there.
> 
> 
> Also wouldn't it make sense to have the scripts/services in nvme-cli?

It definitely would.


* [PATCH] nvme_fc auto-connect scripts
  2017-10-09 20:10 [PATCH] nvme_fc auto-connect scripts James Smart
  2017-10-09 20:13 ` James Smart
  2017-10-10 13:48 ` Johannes Thumshirn
@ 2017-10-27  7:48 ` Hannes Reinecke
  2017-10-27 17:50   ` James Smart
  2 siblings, 1 reply; 10+ messages in thread
From: Hannes Reinecke @ 2017-10-27  7:48 UTC (permalink / raw)


On 10/09/2017 10:10 PM, James Smart wrote:
> Sagi recently posted a "one-shot" call to nvme-connect via a systemd
> script. FC has no problems operating with it assuming all the required
> connect fields are present (host_traddr required).
> 
> However FC is a bit more dynamic, and we've built up scripts that can
> be triggered whenever a new FC remote port with a discovery subsystem is
> detected. Whenever such a port is detected (by registration of the
> remoteport with the nvme_fc transport by the lldd), the transport
> generates a udev event.  A udev rule is created that parses the event
> information into the FC remote port (traddr) that supports a discovery
> subsystem and the FC host port that is connected to the FC remote port
> (host_traddr). The rule then invokes an instance-based systemd service
> to perform a "nvme connect-all" with the traddr and host-traddr info.
> 
> To resolve some issues seen by discovery of remote ports before the udev
> service was running and where the events were not replayed, there is an
> additional systemd service that runs once and invokes a script which
> locates the FC host ports on the system, looks for any connected FC
> remote ports with discovery subsystems, and invokes a connect-all for
> them.  The FC port information is specific to lpfc attributes currently,
> but when the shared transport is in place, they will be common.
> 
> I don't claim to be a systemd expert, so comments are certainly welcome.
> 
In general I'm not in favour of calling 'systemctl start' from a udev
rule; this typically will cause systemd to rearrange the internal
dependency chain, and might trigger various services to be enabled or
disabled.

The 'normal' flow of events is:

kernel device -> uevent -> udev rules -> systemd services

ie for the autoconnect it would be _far_ better if we could have an
event per NVMe lport device, as then we could call 'connect' on the
lport device.

So a better workflow would be to have the 'discovery' event trigger a
'change' (or 'add'?) event on the lports, and then we could have
per-lport udev rules triggering the actual connect.

Let's see if I can come up with something along these lines.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de                                      +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* [PATCH] nvme_fc auto-connect scripts
  2017-10-27  7:48 ` Hannes Reinecke
@ 2017-10-27 17:50   ` James Smart
  2017-10-28 13:34     ` Hannes Reinecke
  0 siblings, 1 reply; 10+ messages in thread
From: James Smart @ 2017-10-27 17:50 UTC (permalink / raw)


On 10/27/2017 12:48 AM, Hannes Reinecke wrote:
> On 10/09/2017 10:10 PM, James Smart wrote:
>> Sagi recently posted a "one-shot" call to nvme-connect via a systemd
>> script. FC has no problems operating with it assuming all the required
>> connect fields are present (host_traddr required).
>>
>> However FC is a bit more dynamic, and we've built up scripts that can
>> be triggered whenever a new FC remote port with a discovery subsystem is
>> detected. Whenever such a port is detected (by registration of the
>> remoteport with the nvme_fc transport by the lldd), the transport
>> generates a udev event.  A udev rule is created that parses the event
>> information into the FC remote port (traddr) that supports a discovery
>> subsystem and the FC host port that is connected to the FC remote port
>> (host_traddr). The rule then invokes an instance-based systemd service
>> to perform a "nvme connect-all" with the traddr and host-traddr info.
>>
>> To resolve some issues seen by discovery of remote ports before the udev
>> service was running and where the events were not replayed, there is an
>> additional systemd service that runs once and invokes a script which
>> locates the FC host ports on the system, looks for any connected FC
>> remote ports with discovery subsystems, and invokes a connect-all for
>> them.  The FC port information is specific to lpfc attributes currently,
>> but when the shared transport is in place, they will be common.
>>
>> I don't claim to be a systemd expert, so comments are certainly welcome.
>>
> In general I'm not in favour of calling 'systemctl start' from a udev
> rule; this typically will cause systemd to rearrange the internal
> dependency chain, and might trigger various services to be enabled or
> disabled.
>
> The 'normal' flow of events is:
>
> kernel device -> uevent -> udev rules -> systemd services
>
> ie for the autoconnect it would be _far_ better if we could have an
> event per NVMe lport device, as then we could call 'connect' on the
> lport device.
>
> So a better workflow would be to have the 'discovery' event trigger a
> 'change' (or 'add'?) event on the lports, and then we could have
> per-lport udev rules triggering the actual connect.
>
> Let's see if I can come up with something along these lines.
>
> Cheers,
>
> Hannes

I'm certainly open to better options. udev/systemd is far from nice.
However, what I did has quite a bit of precedent.

I have no idea what you would scan on with just an lport. It's not enough
for nvme-cli. You must have both the lport and the rport information, which
is passed via the udev event info and which must also be passed to nvme
connect-all.

-- james


* [PATCH] nvme_fc auto-connect scripts
  2017-10-27 17:50   ` James Smart
@ 2017-10-28 13:34     ` Hannes Reinecke
  2017-10-28 16:08       ` James Smart
  0 siblings, 1 reply; 10+ messages in thread
From: Hannes Reinecke @ 2017-10-28 13:34 UTC (permalink / raw)


On 10/27/2017 07:50 PM, James Smart wrote:
> On 10/27/2017 12:48 AM, Hannes Reinecke wrote:
>> On 10/09/2017 10:10 PM, James Smart wrote:
>>> Sagi recently posted a "one-shot" call to nvme-connect via a systemd
>>> script. FC has no problems operating with it assuming all the required
>>> connect fields are present (host_traddr required).
>>>
>>> However FC is a bit more dynamic, and we've built up scripts that can
>>> be triggered whenever a new FC remote port with a discovery subsystem is
>>> detected. Whenever such a port is detected (by registration of the
>>> remoteport with the nvme_fc transport by the lldd), the transport
>>> generates a udev event.  A udev rule is created that parses the event
>>> information into the FC remote port (traddr) that supports a discovery
>>> subsystem and the FC host port that is connected to the FC remote port
>>> (host_traddr). The rule then invokes an instance-based systemd service
>>> to perform a "nvme connect-all" with the traddr and host-traddr info.
>>>
>>> To resolve some issues seen by discovery of remote ports before the udev
>>> service was running and where the events were not replayed, there is an
>>> additional systemd service that runs once and invokes a script which
>>> locates the FC host ports on the system, looks for any connected FC
>>> remote ports with discovery subsystems, and invokes a connect-all for
>>> them.  The FC port information is specific to lpfc attributes currently,
>>> but when the shared transport is in place, they will be common.
>>>
>>> I don't claim to be a systemd expert, so comments are certainly welcome.
>>>
>> In general I'm not in favour of calling 'systemctl start' from a udev
>> rule; this typically will cause systemd to rearrange the internal
>> dependency chain, and might trigger various services to be enabled or
>> disabled.
>>
>> The 'normal' flow of events is:
>>
>> kernel device -> uevent -> udev rules -> systemd services
>>
>> ie for the autoconnect it would be _far_ better if we could have an
>> event per NVMe lport device, as then we could call 'connect' on the
>> lport device.
>>
>> So a better workflow would be to have the 'discovery' event trigger a
>> 'change' (or 'add'?) event on the lports, and then we could have
>> per-lport udev rules triggering the actual connect.
>>
>> Let's see if I can come up with something along these lines.
>>
>> Cheers,
>>
>> Hannes
> 
> I'm certainly open to better options. udev/systemd is far from nice.
> However, what I did has quite a bit of precedence.
> 
> I have no idea what you would scan on by just an lport. It's not enough
> for nvme-cli. You must have the lport and the rport information, which
> is passed via the udev event info and which must also be passed to nvme
> connect-all
> 
Oh, I know that.
What I would prefer is to have a single udev rule per nvme connection,
containing both lport and rport information.
That would relieve us of having additional configuration, and we could
enable/disable a connection simply by having (or not having) that udev rule.
This works pretty well; we're doing it exactly the same way on S/390 for
the network and zfcp connections.
So if we have that, we only need to know when the lport/rport
combination is ready; once that is the case we can call the udev rule.
Ideally we'd get an event containing both pieces of information,
and udev would then select the correct rule automatically.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de                                      +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* [PATCH] nvme_fc auto-connect scripts
  2017-10-28 13:34     ` Hannes Reinecke
@ 2017-10-28 16:08       ` James Smart
  2017-11-03  9:37         ` Hannes Reinecke
  0 siblings, 1 reply; 10+ messages in thread
From: James Smart @ 2017-10-28 16:08 UTC (permalink / raw)


On 10/28/2017 6:34 AM, Hannes Reinecke wrote:
>
> Oh, I know that.
> What I would prefer is to have a single udev rule per nvme connection,
> containing both lport and rport information.
> That would relieve us of having additional configuration, and we can
> enable/disable connection by just having that udev rule.
> This works pretty well; we're doing it exactly the same way on S/390 for
> the network and zfcp connections.
> So if we have that then we only need to know when the lport/rport
> combination is ready; once that is done we can call the udev rule.
> Ideally we'll be getting an event containing both of these informations,
> then udev will be selecting the correct rule automatically.
>
> Cheers,
>
> Hannes

The main reason for using systemd was the potential duration of the
resulting nvme connect-all. The scripts are simple if udev makes the
nvme-cli call directly and don't require anything other than a single
generic rule. But given the potential for stalls, that would be very bad.

If I understand you correctly, what you describe requires a priori
knowledge of the connections in order to add the udev rules. That is a
step backward from current FC behavior and from what users are used to.

-- james


* [PATCH] nvme_fc auto-connect scripts
  2017-10-28 16:08       ` James Smart
@ 2017-11-03  9:37         ` Hannes Reinecke
  2017-11-03 10:24           ` Hannes Reinecke
  0 siblings, 1 reply; 10+ messages in thread
From: Hannes Reinecke @ 2017-11-03  9:37 UTC (permalink / raw)


On 10/28/2017 06:08 PM, James Smart wrote:
> On 10/28/2017 6:34 AM, Hannes Reinecke wrote:
>>
>> Oh, I know that.
>> What I would prefer is to have a single udev rule per nvme connection,
>> containing both lport and rport information.
>> That would relieve us of having additional configuration, and we can
>> enable/disable connection by just having that udev rule.
>> This works pretty well; we're doing it exactly the same way on S/390 for
>> the network and zfcp connections.
>> So if we have that then we only need to know when the lport/rport
>> combination is ready; once that is done we can call the udev rule.
>> Ideally we'll be getting an event containing both of these informations,
>> then udev will be selecting the correct rule automatically.
>>
>> Cheers,
>>
>> Hannes
> 
> The main reason for the systemd use was the potential duration of the
> resulting nvme connect-all. The scripts are simple if udev makes the
> nvme cli call and don't require anything other than a single generic
> rule. But given its potential stalls, very bad.
> 
> If I understand you correctly, what you describe requires apriori
> knowledge of the connections in order to add the udevs rule. This is a
> step backward from current FC behavior and what users are used to.
> 
So, here's my take for this:

# cat 65-nvme-fc-connect-nvmf-test-subsys1.rules
SUBSYSTEM!="block, GOTO="nvme_autoconnect_end"
ACTION!="add|change", GOTO="nvme_autoconnect_end"
ENV{FC_EVENT}!="nvmediscovery", GOTO="nvme_autoconnect_end"
ENV{NVMEFC_HOST_TRADDR}!="?*", GOTO="nvme_autoconnect_end"
ENV{NVMEFC_TRADDR}!="?*", GOTO="nvme_autoconnect_end"
RUN+="/usr/sbin/nvme connect -t fc \
--host-traddr=$env{NVMEFC_HOST_TRADDR} \
-n nvmf-test-subsys1 -a $env{NVMEFC_TRADDR}"
LABEL="nvme_autoconnect_end"

Obviously, we would need to filter for the appropriate values of traddr
and host_traddr so as not to attempt connections to non-existent namespaces.

However, I do see the problem of nvme connect stalls; if we had
a systemd service we would indeed solve this issue nicely.
But then we shouldn't activate the service ourselves; it should rather be
activated per default, but depend on a specific NVMe-FC _target_, which
should be presented by the discovery event.

Let's see what I can cobble together here.
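
Maybe something along these lines (completely untested sketch, reusing the
template unit from James' patch): instead of calling systemctl from RUN,
tag the device for systemd and request the service via device-based
activation, e.g.

ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", ENV{NVMEFC_HOST_TRADDR}=="?*", ENV{NVMEFC_TRADDR}=="?*", TAG+="systemd", ENV{SYSTEMD_WANTS}+="nvmefc-connect@--host-traddr=$env{NVMEFC_HOST_TRADDR}\\x20--traddr=$env{NVMEFC_TRADDR}.service"

That way systemd starts the templated service as part of its normal device
handling, and the udev event itself never blocks on the connect.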

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de                                      +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* [PATCH] nvme_fc auto-connect scripts
  2017-11-03  9:37         ` Hannes Reinecke
@ 2017-11-03 10:24           ` Hannes Reinecke
  0 siblings, 0 replies; 10+ messages in thread
From: Hannes Reinecke @ 2017-11-03 10:24 UTC (permalink / raw)


On 11/03/2017 10:37 AM, Hannes Reinecke wrote:
> On 10/28/2017 06:08 PM, James Smart wrote:
[ .. ]
>>
>> The main reason for the systemd use was the potential duration of the
>> resulting nvme connect-all. The scripts are simple if udev makes the
>> nvme cli call and don't require anything other than a single generic
>> rule. But given its potential stalls, very bad.
>>
>> If I understand you correctly, what you describe requires a priori
>> knowledge of the connections in order to add the udev rules. This is a
>> step backward from current FC behavior and what users are used to.
>>
> So, here's my take for this:
> 
> # cat 65-nvme-fc-connect-nvmf-test-subsys1.rules
> SUBSYSTEM!="block, GOTO="nvme_autoconnect_end"
> ACTION!="add|change", GOTO="nvme_autoconnect_end"
> ENV{FC_EVENT}!="nvmediscovery", GOTO="nvme_autoconnect_end"
> ENV{NVMEFC_HOST_TRADDR}!="?*", GOTO="nvme_autoconnect_end"
> ENV{NVMEFC_TRADDR}!="?*", GOTO="nvme_autoconnect_end"
> RUN+="/usr/sbin/nvme connect -t fc \
> --host-traddr=$env{NVMEFC_HOST_TRADDR} \
> -n nvmf-test-subsys1 -a $env{NVMEFC_TRADDR}"
> LABEL="nvme_autoconnect_end"
> 
> Obviously, we would need to filter for the appropriate values of traddr
> and host_traddr to not attempt connections to non-existing namespaces.
> 
> However, I do see the problem of nvme connect stalls; if we were having
> a systemd service we would solve this issue nicely, indeed.
> But then we shouldn't activate the service, it should rather be
> activated per default, but depend on a specific NVMe-FC _target_, which
> should be presented by the discovery event.
> 
> Let's see what I can cobble together here.
> 
Upon further review, we actually _could_ avoid delays upon the 'nvme
connect' call if we delegated the call to the connect workqueue:

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 1ca100a..6c36a93 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -3302,28 +3302,7 @@ nvme_fc_init_ctrl(struct device *dev, struct
nvmf_ctrl_options *opts,

        nvme_fc_set_ctrl_devloss(ctrl, opts);

-       ret = nvme_fc_create_association(ctrl);
-       if (ret) {
-               ctrl->ctrl.opts = NULL;
-               /* initiate nvme ctrl ref counting teardown */
-               nvme_uninit_ctrl(&ctrl->ctrl);
-
-               /* Remove core ctrl ref. */
-               nvme_put_ctrl(&ctrl->ctrl);
-
-               /* as we're past the point where we transition to the ref
-                * counting teardown path, if we return a bad pointer here,
-                * the calling routine, thinking it's prior to the
-                * transition, will do an rport put. Since the teardown
-                * path also does a rport put, we do an extra get here to
-                * so proper order/teardown happens.
-                */
-               nvme_fc_rport_get(rport);
-
-               if (ret > 0)
-                       ret = -EIO;
-               return ERR_PTR(ret);
-       }
+       queue_delayed_work(nvme_fc_wq, &ctrl->connect_work, 0);

Then it should be perfectly possible to have a udev rule that just
kicks off the 'nvme connect'; any eventual cleanup would be done by
either the number of reconnects being exhausted or dev_loss_tmo kicking in.
James?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de                                      +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

