* [PATCH nvme-cli rfc 0/5] nvmf auto-connect scripts
@ 2019-05-04  0:04 James Smart
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 1/5] fabrics: ignore arguments that pass in "none" James Smart
                   ` (4 more replies)
  0 siblings, 5 replies; 22+ messages in thread
From: James Smart @ 2019-05-04  0:04 UTC (permalink / raw)


This RFC is a combination of the nvme-fc auto-connect scripts
posted by Hannes and the RFC from Sagi to allow auto-connect for
persistent discovery controllers that send AENs. It does not
contain the code that handles the AEN and generates the event.

The auto-connect scripts are now commonized, with the exception
of the nvme-fc boot connections service.  It's expected that both
the persistent discovery controllers and nvme-fc can post the same
udev event. Whenever the discovery AEN generates an event, the
new --device=<name> argument will be used to identify the persistent
discovery controller. Other connect attributes will be no-op'd by a
"none" value.  When nvme-fc generates its events, it will specify
--device=none and the discovery controller addressing
arguments.  If a persistent discovery controller exists on nvme-fc,
nothing special is done and AENs from it will follow the
--device=<name> syntax.
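
For illustration (all values below are hypothetical), the two event
flavors would resolve to connect-all invocations along the lines of:

  # AEN from an existing persistent discovery controller:
  nvme connect-all --device=nvme1 --transport=none --traddr=none \
       --trsvcid=none --host-traddr=none

  # nvme-fc transport event (no pre-existing discovery controller):
  nvme connect-all --device=none --transport=fc \
       --traddr=nn-0x200000109b554a01:pn-0x100000109b554a01 \
       --trsvcid=none \
       --host-traddr=nn-0x200000109b554b02:pn-0x100000109b554b02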

The udev event handler supports the new event to be used
by discovery controllers. The nvme-fc transport will be migrated
to issue the new event syntax as well, but the udev script will
continue to support the older-style nvme-fc event info in case
it's running against an older kernel.

Feedback is welcome.


James Smart (3):
  fabrics: ignore arguments that pass in "none"
  fabrics: allow discover to address discovery controller by persistent
    name
  nvmf auto-connect scripts

Sagi Grimberg (2):
  fabrics: support persistent connections to a discovery controller
  fabrics: add --quiet option

 Makefile                                         | 10 +++-
 fabrics.c                                        | 70 +++++++++++++++++-------
 fabrics.h                                        |  2 +
 nvme.spec.in                                     |  7 +++
 nvmf-autoconnect/70-nvmf-autoconnect.conf        |  1 +
 nvmf-autoconnect/70-nvmf-autoconnect.rules       | 18 ++++++
 nvmf-autoconnect/nvmefc-boot-connections.service |  9 +++
 nvmf-autoconnect/nvmf-connect.target             |  2 +
 nvmf-autoconnect/nvmf-connect@.service           | 14 +++++
 9 files changed, 113 insertions(+), 20 deletions(-)
 create mode 100644 nvmf-autoconnect/70-nvmf-autoconnect.conf
 create mode 100644 nvmf-autoconnect/70-nvmf-autoconnect.rules
 create mode 100644 nvmf-autoconnect/nvmefc-boot-connections.service
 create mode 100644 nvmf-autoconnect/nvmf-connect.target
 create mode 100644 nvmf-autoconnect/nvmf-connect@.service

-- 
2.13.7

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 1/5] fabrics: ignore arguments that pass in "none"
  2019-05-04  0:04 [PATCH nvme-cli rfc 0/5] nvmf auto-connect scripts James Smart
@ 2019-05-04  0:04 ` James Smart
  2019-05-07  7:08   ` Hannes Reinecke
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 2/5] fabrics: support persistent connections to a discovery controller James Smart
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: James Smart @ 2019-05-04  0:04 UTC (permalink / raw)


As we want to support discovery uevents over different
transports, we want to allow the kernel to provide missing
information in the form of none and have nvme-cli properly
ignore it.

One example is the host_traddr. If it is not set (which means
that the default route determined the host address) we will
want to do the same for newly discovered controllers.

So allow users to pass 'none' arguments as well.
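
For example (hypothetical rdma addresses), an event-generated command
line such as:

  nvme connect-all --transport=rdma --traddr=192.168.1.10 \
       --trsvcid=4420 --host-traddr=none

will simply skip the host_traddr option when building the options
string, leaving the default route to determine the host address again.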

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
Signed-off-by: James Smart <jsmart2021 at gmail.com>
---
Patch originated from Sagi in an RFC
Fixed typo in Sagi's patch on cfg.hostid
---
 fabrics.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fabrics.c b/fabrics.c
index 511de06..c6ff734 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -565,7 +565,7 @@ add_argument(char **argstr, int *max_len, char *arg_str, char *arg)
 {
 	int len;
 
-	if (arg) {
+	if (arg && strcmp(arg, "none")) {
 		len = snprintf(*argstr, *max_len, ",%s=%s", arg_str, arg);
 		if (len < 0)
 			return -EINVAL;
@@ -658,14 +658,14 @@ retry:
 		return -EINVAL;
 	p += len;
 
-	if (cfg.hostnqn) {
+	if (cfg.hostnqn && strcmp(cfg.hostnqn, "none")) {
 		len = sprintf(p, ",hostnqn=%s", cfg.hostnqn);
 		if (len < 0)
 			return -EINVAL;
 		p += len;
 	}
 
-	if (cfg.hostid) {
+	if (cfg.hostid && strcmp(cfg.hostid, "none")) {
 		len = sprintf(p, ",hostid=%s", cfg.hostid);
 		if (len < 0)
 			return -EINVAL;
@@ -700,7 +700,7 @@ retry:
 		p += len;
 	}
 
-	if (cfg.host_traddr) {
+	if (cfg.host_traddr && strcmp(cfg.host_traddr, "none")) {
 		len = sprintf(p, ",host_traddr=%s", cfg.host_traddr);
 		if (len < 0)
 			return -EINVAL;
-- 
2.13.7

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 2/5] fabrics: support persistent connections to a discovery controller
  2019-05-04  0:04 [PATCH nvme-cli rfc 0/5] nvmf auto-connect scripts James Smart
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 1/5] fabrics: ignore arguments that pass in "none" James Smart
@ 2019-05-04  0:04 ` James Smart
  2019-05-07  7:11   ` Hannes Reinecke
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name James Smart
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: James Smart @ 2019-05-04  0:04 UTC (permalink / raw)


From: Sagi Grimberg <sagi@grimberg.me>

Simply don't destroy the discovery controller after getting the
log pages. Note that a persistent connection to a discovery subsystem
requires a non-zero kato value, so if one is not provided we
simply use a default kato of 30 seconds.
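
A usage sketch (hypothetical tcp address):

  nvme discover --transport=tcp --traddr=192.168.1.10 \
       --trsvcid=8009 --persistent

This leaves the discovery controller's character device in place after
the log page is fetched, with kato defaulting to 30 seconds unless
--keep-alive-tmo is given.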

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>

---
resolved merge error
---
 fabrics.c | 15 ++++++++++++---
 fabrics.h |  2 ++
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/fabrics.c b/fabrics.c
index c6ff734..6fe1343 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -65,6 +65,7 @@ static struct config {
 	int  disable_sqflow;
 	int  hdr_digest;
 	int  data_digest;
+	bool persistent;
 } cfg = { NULL };
 
 #define BUF_SIZE		4096
@@ -858,9 +859,11 @@ static int do_discover(char *argstr, bool connect)
 		return -errno;
 	ret = nvmf_get_log_page_discovery(dev_name, &log, &numrec);
 	free(dev_name);
-	err = remove_ctrl(instance);
-	if (err)
-		return err;
+	if (!cfg.persistent) {
+		err = remove_ctrl(instance);
+		if (err)
+			return err;
+	}
 
 	switch (ret) {
 	case DISC_OK:
@@ -936,6 +939,9 @@ static int discover_from_conf_file(const char *desc, char *argstr,
 		if (err)
 			continue;
 
+		if (cfg.persistent && !cfg.keep_alive_tmo)
+			cfg.keep_alive_tmo = NVMF_DEF_DISC_TMO;
+
 		err = build_options(argstr, BUF_SIZE);
 		if (err) {
 			ret = err;
@@ -978,6 +984,7 @@ int discover(const char *desc, int argc, char **argv, bool connect)
 		{"nr-write-queues", 'W', "LIST", CFG_INT, &cfg.nr_write_queues,    required_argument, "number of write queues to use (default 0)" },
 		{"nr-poll-queues",  'P', "LIST", CFG_INT, &cfg.nr_poll_queues,    required_argument, "number of poll queues to use (default 0)" },
 		{"queue-size",      'Q', "LIST", CFG_INT, &cfg.queue_size,      required_argument, "number of io queue elements to use (default 128)" },
+		{"persistent",  'p', "LIST", CFG_NONE, &cfg.persistent,  no_argument, "persistent discovery connection" },
 		{NULL},
 	};
 
@@ -992,6 +999,8 @@ int discover(const char *desc, int argc, char **argv, bool connect)
 		return discover_from_conf_file(desc, argstr,
 				command_line_options, connect);
 	} else {
+		if (cfg.persistent && !cfg.keep_alive_tmo)
+			cfg.keep_alive_tmo = NVMF_DEF_DISC_TMO;
 		ret = build_options(argstr, BUF_SIZE);
 		if (ret)
 			return ret;
diff --git a/fabrics.h b/fabrics.h
index 988f3ef..7c1664b 100644
--- a/fabrics.h
+++ b/fabrics.h
@@ -1,6 +1,8 @@
 #ifndef _DISCOVER_H
 #define _DISCOVER_H
 
+#define NVMF_DEF_DISC_TMO	30
+
 extern int discover(const char *desc, int argc, char **argv, bool connect);
 extern int connect(const char *desc, int argc, char **argv);
 extern int disconnect(const char *desc, int argc, char **argv);
-- 
2.13.7

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-04  0:04 [PATCH nvme-cli rfc 0/5] nvmf auto-connect scripts James Smart
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 1/5] fabrics: ignore arguments that pass in "none" James Smart
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 2/5] fabrics: support persistent connections to a discovery controller James Smart
@ 2019-05-04  0:04 ` James Smart
  2019-05-07  7:13   ` Hannes Reinecke
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 4/5] fabrics: add --quiet option James Smart
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts James Smart
  4 siblings, 1 reply; 22+ messages in thread
From: James Smart @ 2019-05-04  0:04 UTC (permalink / raw)


To allow discovery (connect/connect-all) to operate against a
persistent discovery controller, let the discovery controller
be specified by its device node name rather than by new connection
attributes.

Also centralize extraction of controller instance from the controller
name to a common helper.
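
For example, assuming a persistent discovery controller already exists
as /dev/nvme1 (remaining attributes no-op'd per patch 1):

  nvme connect-all --device=nvme1 --transport=none --traddr=none \
       --trsvcid=none --host-traddr=none

This reuses the existing controller for the discovery log read instead
of creating, and later removing, a transient one.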

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
Signed-off-by: James Smart <jsmart2021 at gmail.com>
---
Patch originated from Sagi in an RFC
fix merge issue
fix contoller->controller
add cfg.device check for "none"
---
 fabrics.c | 35 +++++++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/fabrics.c b/fabrics.c
index 6fe1343..a80c11c 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -189,6 +189,19 @@ static const char *cms_str(__u8 cm)
 
 static int do_discover(char *argstr, bool connect);
 
+static int ctrl_instance(char *device)
+{
+	int ret, instance;
+
+	device = basename(device);
+	ret = sscanf(device, "nvme%d", &instance);
+	if (ret < 0)
+		return ret;
+	if (!ret)
+		return -1;
+	return instance;
+}
+
 static int add_ctrl(const char *argstr)
 {
 	substring_t args[MAX_OPT_ARGS];
@@ -851,7 +864,10 @@ static int do_discover(char *argstr, bool connect)
 	char *dev_name;
 	int instance, numrec = 0, ret, err;
 
-	instance = add_ctrl(argstr);
+	if (!cfg.device)
+		instance = add_ctrl(argstr);
+	else
+		instance = ctrl_instance(cfg.device);
 	if (instance < 0)
 		return instance;
 
@@ -859,7 +875,7 @@ static int do_discover(char *argstr, bool connect)
 		return -errno;
 	ret = nvmf_get_log_page_discovery(dev_name, &log, &numrec);
 	free(dev_name);
-	if (!cfg.persistent) {
+	if (!cfg.device && !cfg.persistent) {
 		err = remove_ctrl(instance);
 		if (err)
 			return err;
@@ -975,6 +991,7 @@ int discover(const char *desc, int argc, char **argv, bool connect)
 		{"hostnqn",     'q', "LIST", CFG_STRING, &cfg.hostnqn,     required_argument, "user-defined hostnqn (if default not used)" },
 		{"hostid",      'I', "LIST", CFG_STRING, &cfg.hostid,      required_argument, "user-defined hostid (if default not used)"},
 		{"raw",         'r', "LIST", CFG_STRING, &cfg.raw,         required_argument, "raw output file" },
+		{"device",      'd', "LIST", CFG_STRING, &cfg.device, required_argument, "use existing discovery controller device" },
 		{"keep-alive-tmo",  'k', "LIST", CFG_INT, &cfg.keep_alive_tmo,  required_argument, "keep alive timeout period in seconds" },
 		{"reconnect-delay", 'c', "LIST", CFG_INT, &cfg.reconnect_delay, required_argument, "reconnect timeout period in seconds" },
 		{"ctrl-loss-tmo",   'l', "LIST", CFG_INT, &cfg.ctrl_loss_tmo,   required_argument, "controller loss timeout period in seconds" },
@@ -993,6 +1010,9 @@ int discover(const char *desc, int argc, char **argv, bool connect)
 	if (ret)
 		return ret;
 
+	if (cfg.device && !strcmp(cfg.device, "none"))
+		cfg.device = NULL;
+
 	cfg.nqn = NVME_DISC_SUBSYS_NAME;
 
 	if (!cfg.transport && !cfg.traddr) {
@@ -1130,15 +1150,10 @@ static int disconnect_by_nqn(char *nqn)
 static int disconnect_by_device(char *device)
 {
 	int instance;
-	int ret;
-
-	device = basename(device);
-	ret = sscanf(device, "nvme%d", &instance);
-	if (ret < 0)
-		return ret;
-	if (!ret)
-		return -1;
 
+	instance = ctrl_instance(device);
+	if (instance < 0)
+		return instance;
 	return remove_ctrl(instance);
 }
 
-- 
2.13.7

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 4/5] fabrics: add --quiet option
  2019-05-04  0:04 [PATCH nvme-cli rfc 0/5] nvmf auto-connect scripts James Smart
                   ` (2 preceding siblings ...)
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name James Smart
@ 2019-05-04  0:04 ` James Smart
  2019-05-07  7:14   ` Hannes Reinecke
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts James Smart
  4 siblings, 1 reply; 22+ messages in thread
From: James Smart @ 2019-05-04  0:04 UTC (permalink / raw)


From: Sagi Grimberg <sagi@grimberg.me>

Now we are going to have discovery log change events, so
instead of trying to figure out which NVM subsystem ports
are already connected, we let the driver do that and
only suppress the failure messages.

This option will be used by the corresponding discovery log change
udev rule.
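
For example (hypothetical address), a rule-driven invocation like:

  nvme connect-all --quiet --transport=tcp --traddr=192.168.1.10 \
       --trsvcid=8009

suppresses both the "Failed to write to /dev/nvme-fabrics" and the
"already connected" messages for log entries whose controllers fail
with EALREADY.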

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 fabrics.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/fabrics.c b/fabrics.c
index a80c11c..dc6b824 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -66,6 +66,7 @@ static struct config {
 	int  hdr_digest;
 	int  data_digest;
 	bool persistent;
+	bool quiet;
 } cfg = { NULL };
 
 #define BUF_SIZE		4096
@@ -216,9 +217,11 @@ static int add_ctrl(const char *argstr)
 		goto out;
 	}
 
-	if (write(fd, argstr, len) != len) {
-		fprintf(stderr, "Failed to write to %s: %s\n",
-			 PATH_NVME_FABRICS, strerror(errno));
+	ret = write(fd, argstr, len);
+	if (ret != len) {
+		if (errno != EALREADY || !cfg.quiet)
+			fprintf(stderr, "Failed to write to %s: %s\n",
+				 PATH_NVME_FABRICS, strerror(errno));
 		ret = -errno;
 		goto out_close;
 	}
@@ -840,6 +843,10 @@ static int connect_ctrls(struct nvmf_disc_rsp_page_hdr *log, int numrec)
 		/* already connected print message	*/
 		if (instance == -EALREADY) {
 			const char *traddr = log->entries[i].traddr;
+
+			if (cfg.quiet)
+				continue;
+
 			fprintf(stderr,
 				"traddr=%.*s is already connected\n",
 				space_strip_len(NVMF_TRADDR_SIZE, traddr),
@@ -1002,6 +1009,7 @@ int discover(const char *desc, int argc, char **argv, bool connect)
 		{"nr-poll-queues",  'P', "LIST", CFG_INT, &cfg.nr_poll_queues,    required_argument, "number of poll queues to use (default 0)" },
 		{"queue-size",      'Q', "LIST", CFG_INT, &cfg.queue_size,      required_argument, "number of io queue elements to use (default 128)" },
 		{"persistent",  'p', "LIST", CFG_NONE, &cfg.persistent,  no_argument, "persistent discovery connection" },
+		{"quiet",       'Q', "LIST", CFG_NONE, &cfg.quiet,  no_argument, "suppress already connected errors" },
 		{NULL},
 	};
 
-- 
2.13.7

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts
  2019-05-04  0:04 [PATCH nvme-cli rfc 0/5] nvmf auto-connect scripts James Smart
                   ` (3 preceding siblings ...)
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 4/5] fabrics: add --quiet option James Smart
@ 2019-05-04  0:04 ` James Smart
  2019-05-07  7:17   ` Hannes Reinecke
                     ` (2 more replies)
  4 siblings, 3 replies; 22+ messages in thread
From: James Smart @ 2019-05-04  0:04 UTC (permalink / raw)


This set of scripts is a combination of those sent by Hannes, Sagi,
and me in earlier patches and RFCs.

Auto-connect operates by the nvme core layer or nvme-fc transport
generating a udev event with discovery controller addressing
information. The nvme core layer generates an event when a
persistent discovery controller generates a Discovery Log Change
Notification AEN.  The nvme-fc transport generates an event when
an FC rport that has an NVME Discovery controller is detected or
when an FC state change event occurs for an FC rport that has
an NVME Discovery controller.

The udev event is handled by a script that extracts the Discovery
controller addressing information and initiates a systemd service
to perform a "nvme connect-all" to the Discovery controller.
The "nvme connect-all" request is not called directly from the udev
handler itself as the request may take some time or stall altogether,
which would block other udev event handling.  By transitioning to
a systemd service, the call can take as much time as needed to
complete.
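
As an illustration of the handoff (hypothetical values), the udev rule
escapes the connect arguments into the service instance name using
literal \t separators, e.g.:

  systemctl --no-block start \
    'nvmf-connect@--device=nvme1\t--transport=tcp\t--traddr=192.168.1.10\t--trsvcid=8009\t--host-traddr=none.service'

The service template then turns the \t sequences in its %i instance
string back into whitespace with "echo -e" before invoking
"nvme connect-all".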

The scripts consist of:
- A udev script that handles nvme core and nvme-fc udev events.
  The udev handler starts an nvmf-connect systemd service.
- An nvmf-connect systemd service. The service, in its instance
  name, is passed the connect arguments for the discovery
  controller. The service performs a "nvme connect-all" to the
  discovery controller.
- An nvmefc-boot-connections systemd service. This is a run-once
  service, run after udev is enabled, which will replay events
  generated by NVME-FC devices detected during boot while udev
  was not yet running.
- To stop autoconnect, an additional nvmf-connect.target has
  been added, which will instruct systemd to cancel all
  outstanding autoconnect services (see the example below).
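
For instance, to cancel any in-flight auto-connect attempts:

  systemctl stop nvmf-connect.target

This stops all nvmf-connect@ service instances, as each one is marked
PartOf=nvmf-connect.target.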

Note: Although the nvme-fc subsystem is converting to use the
  same nvme core layer event mechanism, the nvme-fc-specific
  udev event that has been in existence for a while is contained
  in the script so that the utilities may run against older
  kernels.

Signed-off-by: James Smart <jsmart2021 at gmail.com>
CC: Simon Schricker <sschricker at suse.com>
CC: Hannes Reinecke <hare at suse.com>
CC: Sagi Grimberg <sagi at grimberg.me>
---
I could use some recommendations on where to install the
70-nvmf-autoconnect.conf and nvmf-connect.target files
---
 Makefile                                         | 10 +++++++++-
 nvme.spec.in                                     |  7 +++++++
 nvmf-autoconnect/70-nvmf-autoconnect.conf        |  1 +
 nvmf-autoconnect/70-nvmf-autoconnect.rules       | 18 ++++++++++++++++++
 nvmf-autoconnect/nvmefc-boot-connections.service |  9 +++++++++
 nvmf-autoconnect/nvmf-connect.target             |  2 ++
 nvmf-autoconnect/nvmf-connect@.service           | 14 ++++++++++++++
 7 files changed, 60 insertions(+), 1 deletion(-)
 create mode 100644 nvmf-autoconnect/70-nvmf-autoconnect.conf
 create mode 100644 nvmf-autoconnect/70-nvmf-autoconnect.rules
 create mode 100644 nvmf-autoconnect/nvmefc-boot-connections.service
 create mode 100644 nvmf-autoconnect/nvmf-connect.target
 create mode 100644 nvmf-autoconnect/nvmf-connect@.service

diff --git a/Makefile b/Makefile
index 4bfbebb..a86751e 100644
--- a/Makefile
+++ b/Makefile
@@ -8,6 +8,8 @@ DESTDIR =
 PREFIX ?= /usr/local
 SYSCONFDIR = /etc
 SBINDIR = $(PREFIX)/sbin
+SYSTEMDDIR = /usr/lib/systemd
+UDEVDIR = /usr/lib/udev
 LIB_DEPENDS =
 
 ifeq ($(LIBUUID),0)
@@ -86,6 +88,12 @@ install-bash-completion:
 	$(INSTALL) -d $(DESTDIR)$(PREFIX)/share/bash-completion/completions
 	$(INSTALL) -m 644 -T ./completions/bash-nvme-completion.sh $(DESTDIR)$(PREFIX)/share/bash-completion/completions/nvme
 
+install-udev:
+	$(INSTALL) -d $(DESTDIR)$(SYSTEMDDIR)/system
+	$(INSTALL) -m 644 ./nvmf-autoconnect/*.service $(DESTDIR)$(SYSTEMDDIR)/system
+	$(INSTALL) -d $(DESTDIR)$(UDEVDIR)/rules.d
+	$(INSTALL) -m 644 ./nvmf-autoconnect/*.rules $(DESTDIR)$(UDEVDIR)/rules.d
+
 install-zsh-completion:
 	$(INSTALL) -d $(DESTDIR)$(PREFIX)/share/zsh/site-functions
 	$(INSTALL) -m 644 -T ./completions/_nvme $(DESTDIR)$(PREFIX)/share/zsh/site-functions/_nvme
@@ -106,7 +114,7 @@ install-etc:
 		$(INSTALL) -m 644 -T ./etc/discovery.conf.in $(DESTDIR)$(SYSCONFDIR)/nvme/discovery.conf; \
 	fi
 
-install-spec: install-bin install-man install-bash-completion install-zsh-completion install-etc
+install-spec: install-bin install-man install-bash-completion install-zsh-completion install-etc install-udev
 install: install-spec install-hostparams
 
 nvme.spec: nvme.spec.in NVME-VERSION-FILE
diff --git a/nvme.spec.in b/nvme.spec.in
index 6934f8f..40a5eff 100644
--- a/nvme.spec.in
+++ b/nvme.spec.in
@@ -35,6 +35,9 @@ make install-spec DESTDIR=%{buildroot} PREFIX=/usr
 %{_sysconfdir}/nvme/hostnqn
 %{_sysconfdir}/nvme/hostid
 %{_sysconfdir}/nvme/discovery.conf
+/usr/lib/udev/rules.d/70-nvmf-autoconnect.rules
+/usr/lib/systemd/system/nvmf-connect@.service
+/usr/lib/systemd/system/nvmefc-boot-connections.service
 
 %clean
 rm -rf $RPM_BUILD_ROOT
@@ -47,6 +50,10 @@ if [ $1 -eq 1 ]; then # 1 : This package is being installed for the first time
         if [ ! -s %{_sysconfdir}/nvme/hostid ]; then
                 uuidgen > %{_sysconfdir}/nvme/hostid
         fi
+
+	# apply udev and systemd changes that we did
+	udevadm control --reload-rules && udevadm trigger
+	systemctl daemon-reload
 fi
 
 %changelog
diff --git a/nvmf-autoconnect/70-nvmf-autoconnect.conf b/nvmf-autoconnect/70-nvmf-autoconnect.conf
new file mode 100644
index 0000000..844f3d9
--- /dev/null
+++ b/nvmf-autoconnect/70-nvmf-autoconnect.conf
@@ -0,0 +1 @@
+install_items+="/usr/lib/udev/rules.d/70-nvmf-autoconnect.rules"
diff --git a/nvmf-autoconnect/70-nvmf-autoconnect.rules b/nvmf-autoconnect/70-nvmf-autoconnect.rules
new file mode 100644
index 0000000..5405314
--- /dev/null
+++ b/nvmf-autoconnect/70-nvmf-autoconnect.rules
@@ -0,0 +1,18 @@
+#
+# nvmf-autoconnect.rules:
+#   Handles udev events which automatically invoke a scan via a
+#   discovery controller and connect to elements in the discovery log.
+#
+#
+
+# Events from persistent discovery controllers or nvme-fc transport events
+ACTION=="change", SUBSYSTEM=="nvme", ENV{NVME_EVENT}=="discovery",\
+  ENV{NVME_CTLR_NAME}=="*", ENV{NVME_TRTYPE}=="*", ENV{NVME_TRADDR}=="*", \
+  ENV{NVME_TRSVCID}=="*", ENV{NVME_HOST_TRADDR}=="*", \
+  RUN+="/usr/bin/systemctl --no-block start nvmf-connect@--device=$env{NVME_CTLR_NAME}\t--transport=$env{NVME_TRTYPE}\t--traddr=$env{NVME_TRADDR}\t--trsvcid=$env{NVME_TRSVCID}\t--host-traddr=$env{NVME_HOST_TRADDR}.service"
+
+# nvme-fc transport generated events (old-style for compatibility)
+ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", \
+  ENV{NVMEFC_HOST_TRADDR}=="*",  ENV{NVMEFC_TRADDR}=="*", \
+  RUN+="/usr/bin/systemctl --no-block start nvmf-connect@--device=none\t--transport=fc\t--traddr=$env{NVMEFC_TRADDR}\t--trsvcid=none\t--host-traddr=$env{NVMEFC_HOST_TRADDR}.service"
+
diff --git a/nvmf-autoconnect/nvmefc-boot-connections.service b/nvmf-autoconnect/nvmefc-boot-connections.service
new file mode 100644
index 0000000..84f6486
--- /dev/null
+++ b/nvmf-autoconnect/nvmefc-boot-connections.service
@@ -0,0 +1,9 @@
+[Unit]
+Description=Auto-connect to subsystems on FC-NVME devices found during boot
+
+[Service]
+Type=oneshot
+ExecStart=/bin/sh -c "echo add > /sys/class/fc/fc_udev_device/nvme_discovery"
+
+[Install]
+WantedBy=default.target
diff --git a/nvmf-autoconnect/nvmf-connect.target b/nvmf-autoconnect/nvmf-connect.target
new file mode 100644
index 0000000..f64a37c
--- /dev/null
+++ b/nvmf-autoconnect/nvmf-connect.target
@@ -0,0 +1,2 @@
+[Unit]
+Description=All instances of nvmf-autoconnect daemon
diff --git a/nvmf-autoconnect/nvmf-connect@.service b/nvmf-autoconnect/nvmf-connect@.service
new file mode 100644
index 0000000..7d9f856
--- /dev/null
+++ b/nvmf-autoconnect/nvmf-connect@.service
@@ -0,0 +1,14 @@
+#
+# Unit file used by 70-nvmf-autoconnect.rules.
+#
+
+[Unit]
+Description=NVMf auto-connect scan upon nvme discovery controller events
+After=syslog.target
+PartOf=nvmf-connect.target
+Requires=nvmf-connect.target
+
+[Service]
+Type=simple
+Environment="CONNECT_ARGS=%i"
+ExecStart=/bin/sh -c "/usr/sbin/nvme connect-all `/usr/bin/echo -e $CONNECT_ARGS`"
-- 
2.13.7

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 1/5] fabrics: ignore arguments that pass in "none"
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 1/5] fabrics: ignore arguments that pass in "none" James Smart
@ 2019-05-07  7:08   ` Hannes Reinecke
  0 siblings, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2019-05-07  7:08 UTC (permalink / raw)


On 5/4/19 2:04 AM, James Smart wrote:
> As we want to support discovery uevents over different
> transports, we want to allow the kernel to provide missing
> information in the form of none and have nvme-cli properly
> ignore it.
> 
> One example is the host_traddr. If it is not set (which means
> that the default route determined the host address) we will
> want to do the same for newly discovered controllers.
> 
> So allow users to pass 'none' arguments as well.
> 
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
> Signed-off-by: James Smart <jsmart2021 at gmail.com>
> ---
> Patch originated from Sagi in an RFC
> Fixed typo in Sagi's patch on cfg.hostid
> ---
>   fabrics.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare at suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 2/5] fabrics: support persistent connections to a discovery controller
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 2/5] fabrics: support persistent connections to a discovery controller James Smart
@ 2019-05-07  7:11   ` Hannes Reinecke
  0 siblings, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2019-05-07  7:11 UTC (permalink / raw)


On 5/4/19 2:04 AM, James Smart wrote:
> From: Sagi Grimberg <sagi at grimberg.me>
> 
> Simply don't destroy the discovery controller after getting the
> log pages. Note that a persistent connection to a discovery subsystem
> requires a non-zero kato value, so if one is not provided we
> simply use a default kato of 30 seconds.
> 
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
> 
Reviewed-by: Hannes Reinecke <hare at suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name James Smart
@ 2019-05-07  7:13   ` Hannes Reinecke
  2019-05-07 16:56     ` James Smart
  0 siblings, 1 reply; 22+ messages in thread
From: Hannes Reinecke @ 2019-05-07  7:13 UTC (permalink / raw)


On 5/4/19 2:04 AM, James Smart wrote:
> To allow discovery (connect/connect-all) to operate against a
> persistent discovery controller, let the discovery controller
> be specified by its device node name rather than by new connection
> attributes.
> 
> Also centralize extraction of controller instance from the controller
> name to a common helper.
> 
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
> Signed-off-by: James Smart <jsmart2021 at gmail.com>
> ---
> Patch originated from Sagi in an RFC
> fix merge issue
> fix contoller->controller
> add cfg.device check for "none"
> ---
>   fabrics.c | 35 +++++++++++++++++++++++++----------
>   1 file changed, 25 insertions(+), 10 deletions(-)
> 
Actually, I would rather scan existing controllers, and use the first 
one that matches the arguments.
With the current design we would have different calling sequences, 
depending on whether it's the first call (which creates the controller), 
or any subsequent call.
If we were looking up the existing controller from the arguments we 
could be using the same commandline throughout.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 4/5] fabrics: add --quiet option
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 4/5] fabrics: add --quiet option James Smart
@ 2019-05-07  7:14   ` Hannes Reinecke
  0 siblings, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2019-05-07  7:14 UTC (permalink / raw)


On 5/4/19 2:04 AM, James Smart wrote:
> From: Sagi Grimberg <sagi at grimberg.me>
> 
> Now we are going to have discovery log change events, so
> instead of trying to figure out which NVM subsystem ports
> are already connected, we let the driver do that and
> only suppress the failure messages.
> 
> This option will be used by the corresponding discovery log change
> udev rule.
> 
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
> ---
>   fabrics.c | 14 +++++++++++---
>   1 file changed, 11 insertions(+), 3 deletions(-)
> 
Signed-off-by: Hannes Reinecke <hare at suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts James Smart
@ 2019-05-07  7:17   ` Hannes Reinecke
  2019-05-07 20:26   ` James Smart
  2019-05-12 15:15   ` Sagi Grimberg
  2 siblings, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2019-05-07  7:17 UTC (permalink / raw)


On 5/4/19 2:04 AM, James Smart wrote:
> This set of scripts is a combination of those sent by Hannes, Sagi,
> and me in earlier patches and RFCs.
> 
> Auto-connect operates by the nvme core layer or nvme-fc transport
> generating a udev event with discovery controller addressing 
> information. The nvme core layer generates an event when a
> persistent discovery controller generates a Discovery Log Change
> Notification AEN.  The nvme-fc transport generates an event when
> an FC rport that has an NVME Discovery controller is detected or 
> when an FC state change event occurs for an FC rport that has 
> an NVME Discovery controller.
> 
> The udev event is handled by a script that extracts the Discovery
> controller addressing information and initiates a systemd service
> to perform a "nvme connect-all" to the Discovery controller.
> The "nvme connect-all" request is not called directly from the udev
> handler itself as the request may take some time or stall altogether,
> which would block other udev event handling.  By transitioning to
> a sytemd service, the call can take as much time as needed to
> complete.
> 
> The scripts consist of:
> - A udev script that handles nvme core and nvme-fc udev events.
>    The udev handler starts an nvmf-connect systemd service.
> - An nvmf-connect systemd service. The service, in its instance
>    name, is passed the connect arguments for the discovery
>    controller. The service performs a "nvme connect-all" to the
>    discovery controller.
> - An nvmefc-boot-connections systemd service. This is a run-once
>    service, run after udev is enabled, which will replay events
>    generated by NVME-FC devices detected during boot while udev
>    was not yet running.
> - To stop autoconnect, an additional nvmf-connect.target has
>    been added, which will instruct systemd to cancel all
>    outstanding autoconnect services.
> 
> Note: Although the nvme-fc subsystem is converting to use the
>    same nvme core layer event mechanism, the nvme-fc-specific
>    udev event that has been in existence for a while is contained
>    in the script so that the utilities may run against older 
>    kernels.
> 
> Signed-off-by: James Smart <jsmart2021 at gmail.com>
> CC: Simon Schricker <sschricker at suse.com>
> CC: Hannes Reinecke <hare at suse.com>
> CC: Sagi Grimberg <sagi at grimberg.me>
> ---
> I could use some recommendations on where to install the
> 70-nvmf-autoconnect.conf and nvmf-connect.target files
70-nvmf-autoconnect.conf needs to go into the dracut directory
(ie either /etc/dracut.conf.d or /usr/lib/dracut/dracut.conf.d),
and nvmf-connect.target needs to go into /usr/lib/systemd/system.

For the remainder:

Reviewed-by: Hannes Reinecke <hare at suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-07  7:13   ` Hannes Reinecke
@ 2019-05-07 16:56     ` James Smart
  2019-05-08  6:10       ` Hannes Reinecke
  2019-05-12 15:00       ` Sagi Grimberg
  0 siblings, 2 replies; 22+ messages in thread
From: James Smart @ 2019-05-07 16:56 UTC (permalink / raw)


On 5/7/2019 12:13 AM, Hannes Reinecke wrote:
> On 5/4/19 2:04 AM, James Smart wrote:
>> To allow discovery (connect/connect-all) to operate against a
>> persistent discovery controller, let the discovery controller
>> be specified by its device node name rather than by new connection
>> attributes.
>>
>> Also centralize extraction of controller instance from the controller
>> name to a common helper.
>>
>> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
>> Signed-off-by: James Smart <jsmart2021 at gmail.com>
>> ---
>> Patch originated from Sagi in an RFC
>> fix merge issue
>> fix contoller->controller
>> add cfg.device check for "none"
>> ---
>>   fabrics.c | 35 +++++++++++++++++++++++++----------
>>   1 file changed, 25 insertions(+), 10 deletions(-)
>>
> Actually, I would rather scan existing controllers, and use the first 
> one that matches the arguments.
> With the current design we would have different calling sequences, 
> depending on whether it's the first call (which creates the 
> controller), or any subsequent call.
> If we were looking up the existing controller from the arguments we 
> could be using the same commandline throughout.
>

Well, this is where we're at odds.  We need to decide if we should 
create another full controller instance when the event occurs, which is 
what FC does today (can have multiple running concurrently), or whether 
we want to reuse the persistent one. I would think implementers of the 
persistent discovery controller would want you to reuse it and not have 
to support more concurrent discovery controller instances.  The "reuse" 
case is why passing the --device arg makes it very straight forward.  We 
could always have the utility get the addressing info and look at 
existing controller and decide to use an existing one - I assume we 
would only select persistent ones in order to avoid delete race 
conditions, but seems like a lot of work for what the AEN already 
knows.  I do have a little concern that there's always a window from 
when the event is posted to when the /dev devicename is opened, which 
allows for the device name to be terminated and possibly reused for 
something else, but I guess we're willing to live with this.

And as for "not scanning" when you connect to the persistent discovery 
controller, and instead acting off of a pseudo-AEN posted to udev - it's 
a nice premise, but I'm not sure what it buys you. At best, it's a 
difference in how you connect to the persistent controller - e.g. 
"--connect" vs "--connect-all". Given all the logic is there and present 
with connect-all, I don't know why you wouldn't just use it.  We can 
always decide to post additional AENs in the kernel if needed (and there 
will always be races of multiple AENs kicking off multiple rescans 
concurrently).

-- james

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts James Smart
  2019-05-07  7:17   ` Hannes Reinecke
@ 2019-05-07 20:26   ` James Smart
  2019-05-09 15:40     ` Hannes Reinecke
  2019-05-12 15:15   ` Sagi Grimberg
  2 siblings, 1 reply; 22+ messages in thread
From: James Smart @ 2019-05-07 20:26 UTC (permalink / raw)



> +# Events from persistent discovery controllers or nvme-fc transport events
> +ACTION=="change", SUBSYSTEM=="nvme", ENV{NVME_EVENT}=="discovery",\
> +  ENV{NVME_CTLR_NAME}=="*", ENV{NVME_TRTYPE}=="*", ENV{NVME_TRADDR}=="*", \
> +  ENV{NVME_TRSVCID}=="*", ENV{NVME_HOST_TRADDR}=="*", \
> +  RUN+="/usr/bin/systemctl --no-block start nvmf-connect@--device=$env{NVME_CTLR_NAME}\t--transport=$env{NVME_TRTYPE}\t--traddr=$env{NVME_TRADDR}\t--trsvcid=$env{NVME_TRSVCID}\t--host-traddr=$env{NVME_HOST_TRADDR}.service"
> +
> +# nvme-fc transport generated events (old-style for compatibility)
> +ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", \
> +  ENV{NVMEFC_HOST_TRADDR}=="*",  ENV{NVMEFC_TRADDR}=="*", \
> +  RUN+="/usr/bin/systemctl --no-block start nvmf-connect@--device=none\t--transport=fc\t--traddr=$env{NVMEFC_TRADDR}\t--trsvcid=none\t--host-traddr=$env{NVMEFC_HOST_TRADDR}.service"
>

I assume that we should be adding --quiet to the above lines.  No need 
to get it from the event.  Agree?

And we would not add --persistent.  Which makes me assume --persistent 
would be left to manual/scripted calls or possibly pulled in by the 
"defaults" file we were discussing.??? So, the linux policy is our 
default discovery controller is non-persistent and requires admin 
actions to enable the persistent option - as opposed to - looking at 
IdentifyController:KAS to decide if KATO is supported, and if so, make 
the connection persistent?

-- james

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-07 16:56     ` James Smart
@ 2019-05-08  6:10       ` Hannes Reinecke
  2019-05-08 19:31         ` James Smart
  2019-05-12 15:02         ` Sagi Grimberg
  2019-05-12 15:00       ` Sagi Grimberg
  1 sibling, 2 replies; 22+ messages in thread
From: Hannes Reinecke @ 2019-05-08  6:10 UTC (permalink / raw)


On 5/7/19 6:56 PM, James Smart wrote:
> On 5/7/2019 12:13 AM, Hannes Reinecke wrote:
>> On 5/4/19 2:04 AM, James Smart wrote:
>>> To allow discovery (connect/connect-all) to operate against a
>>> persistent discovery controller, let the discovery controller
>>> be specified by its device node name rather than by new connection
>>> attributes.
>>>
>>> Also centralize extraction of controller instance from the controller
>>> name to a common helper.
>>>
>>> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
>>> Signed-off-by: James Smart <jsmart2021 at gmail.com>
>>> ---
>>> Patch originated from Sagi in an RFC
>>> fix merge issue
>>> fix contoller->controller
>>> add cfg.device check for "none"
>>> ---
>>>   fabrics.c | 35 +++++++++++++++++++++++++----------
>>>   1 file changed, 25 insertions(+), 10 deletions(-)
>>>
>> Actually, I would rather scan existing controllers, and use the first 
>> one that matches the arguments.
>> With the current design we would have different calling sequences, 
>> depending on whether it's the first call (which creates the 
>> controller), or any subsequent call.
>> If we were looking up the existing controller from the arguments we 
>> could be using the same commandline throughout.
>>
> 
> Well, this is where we're at odds.  We need to decide if we should 
> create another full controller instance when the event occurs, which is 
> what FC does today (can have multiple running concurrently), or whether 
> we want to reuse the persistent one. I would think implementers of the 
> persistent discovery controller would want you to reuse it and not have 
> to support more concurrent discovery controller instances.  The "reuse" 
> case is why passing the --device arg makes it very straight forward.  We 
> could always have the utility get the addressing info and look at 
> existing controller and decide to use an existing one - I assume we 
> would only select persistent ones in order to avoid delete race 
> conditions, but seems like a lot of work for what the AEN already 
> knows.  I do have a little concern that there's always a window from 
> when the event is posted to when the /dev devicename is opened, which
> allows for the device name to be terminated and possibly reused for 
> something else, but I guess we're willing to live with this.
> 
My concern here is that it'll be quite hard to use the --device argument
from within the udev event; the event itself doesn't specify the device,
making it really hard to use it in the first place.
Precisely due to the race issues you mentioned.

But now it becomes a bit quirky; we need to have the persistent 
connection to get the AENs (ie we do have a device node), but we cannot 
use them from the uevent, and have to create a new one.

So this is my issue here: _if_ we have a persistent connection which 
generates AENs, we should be using the very same connection to get the 
discovery information in the connect-all call.

Which means we probably have to update the discovery AEN uevents to 
include a device name; then the uevent will carry the information, and 
the presence of the device in the uevent will tell us whether to use the 
persistent connection or not.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare at suse.de                              +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-08  6:10       ` Hannes Reinecke
@ 2019-05-08 19:31         ` James Smart
  2019-05-09  6:07           ` Hannes Reinecke
  2019-05-12 15:02         ` Sagi Grimberg
  1 sibling, 1 reply; 22+ messages in thread
From: James Smart @ 2019-05-08 19:31 UTC (permalink / raw)




On 5/7/2019 11:10 PM, Hannes Reinecke wrote:
> On 5/7/19 6:56 PM, James Smart wrote:
>>
>> Well, this is where we're at odds.  We need to decide if we should 
>> create another full controller instance when the event occurs, which 
>> is what FC does today (can have multiple running concurrently), or 
>> whether we want to reuse the persistent one. I would think 
>> implementers of the persistent discovery controller would want you to 
>> reuse it and not have to support more concurrent discovery controller 
>> instances.? The "reuse" case is why passing the --device arg makes it 
>> very straight forward.? We could always have the utility get the 
>> addressing info and look at existing controller and decide to use an 
>> existing one - I assume we would only select persistent ones in order 
>> to avoid delete race conditions, but seems like a lot of work for 
>> what the AEN already knows.  I do have a little concern that there's 
>> always a window from when the event is posted to when the /dev 
>> devicename is opened, which
>> allows for the device name to be terminated and possibly reused for 
>> something else, but I guess we're willing to live with this.
>>
> My concern here is that it'll be quite hard to use the --device argument
> from within the udev event; the event itself doesn't specify the device,
> making it really hard to use it in the first place.
> Precisely due to the race issues you mentioned.

uevent does specify the device (the --device argument). I don't know 
about hard - simply build up a string with "nvme" + ctrl instance #, 
example: nvme5

>
> But now it becomes a bit quirky; we need to have the persistent 
> connection to get the AENs (ie we do have a device node), but we 
> cannot use them from the uevent, and have to create a new one.

Not sure I follow. You seemed to say - we won't attempt to use the 
existing device and would create a new one.

>
> So this is my issue here: _if_ we have a persistent connection which 
> generates AENs, we should be using the very same connection to get the 
> discovery information in the connect-all call.
>
> Which means we probably have to update the discovery AEN uevents to 
> include a device name; then the uevent will carry the information, and 
> the presence of the device in the uevent will tell us whether to use 
> the persistent connection or not.
>

I thought that's what we said.  We're dealing with a persistent 
controller that was created at some time that eventually generates an 
AEN. And the current proposal is for the AEN to generate a udev event 
that will contain --device=nvme<instance#> (corresponding to the 
persistent controller) which will do a connect-all.? The cli, seeing the 
device argument will skip trying to create a controller and instead open 
the device and then resume normal connect-all scanning/processing.? We 
are in agreement on this ?

Which leaves:
a) how does the persistent controller get created in the first place: 
which I answer with admin scripting, manual action, or transport event 
generation.
and
b) if performing a connect-all, and a discovery log was read with a 
discovery controller as a log entry, how should the cli know whether to 
create the controller as persistent or not: which I proposed that it be 
one of the following: a) it doesn't, only manual actions/scripting would 
set the parameter; b) the new defaults file processing can figure out 
that it needs to add the argument; or c) we change the default action 
when connecting to a discovery controller to look at the KAS field to 
see if kato is supported, and if so, always enable the persistent option 
(in which case, no --persistent argument is necessary).

-- james

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-08 19:31         ` James Smart
@ 2019-05-09  6:07           ` Hannes Reinecke
  2019-05-12 15:08             ` Sagi Grimberg
  0 siblings, 1 reply; 22+ messages in thread
From: Hannes Reinecke @ 2019-05-09  6:07 UTC (permalink / raw)


On 5/8/19 9:31 PM, James Smart wrote:
> 
> 
> On 5/7/2019 11:10 PM, Hannes Reinecke wrote:
>> On 5/7/19 6:56 PM, James Smart wrote:
>>>
>>> Well, this is where we're at odds.  We need to decide if we should 
>>> create another full controller instance when the event occurs, which 
>>> is what FC does today (can have multiple running concurrently), or 
>>> whether we want to reuse the persistent one. I would think 
>>> implementers of the persistent discovery controller would want you to 
>>> reuse it and not have to support more concurrent discovery controller 
>>> instances.? The "reuse" case is why passing the --device arg makes it 
>>> very straight forward.? We could always have the utility get the 
>>> addressing info and look at existing controller and decide to use an 
>>> existing one - I assume we would only select persistent ones in order 
>>> to avoid delete race conditions, but seems like a lot of work for 
>>> what the AEN already knows.? I do have a little concern that there's 
>>> always a window from when the event is posted to when the /dev 
>>> devicename is opened, which
>>> allows for the device name to be terminated and possibly reused for 
>>> something else, but I guess we're willing to live with this.
>>>
>> My concern here is that it'll be quite hard to use the --device argument
>> from within the udev event; the event itself doesn't specify the device,
>> making it really hard to use it in the first place.
>> Precisely due to the race issues you mentioned.
> 
> uevent does specify the device (the --device argument). I don't know 
> about hard - simply build up a string with "nvme" + ctrl instance #   
> example: nvme5
> 
Yes; I've done some updates to your patch series treating it as such.

>>
>> But now it becomes a bit quirky; we need to have the persistent 
>> connection to get the AENs (ie we do have a device node), but we 
>> cannot use them from the uevent, and have to create a new one.
> 
> Not sure I follow. You seemed to say - we won't attempt to use the 
> existing device and would create a new one.
> 
Sort of. I got irritated by your concern above about a race window 
between the uevent being generated and the device node being opened.

>>
>> So this is my issue here: _if_ we have a persistent connection which 
>> generates AENs, we should be using the very same connection to get the 
>> discovery information in the connect-all call.
>>
>> Which means we probably have to update the discovery AEN uevents to 
>> include a device name; then the uevent will carry the information, and 
>> the presence of the device in the uevent will tell us whether to use 
>> the persistent connection or not.
>>
> 
> I thought that's what we said.  We're dealing with a persistent 
> controller that was created at some time that eventually generates an 
> AEN. And the current proposal is for the AEN to generate a udev event 
> that will contain --device=nvme<instance#> (corresponding to the 
> persistent controller) which will do a connect-all.  The cli, seeing the 
> device argument, will skip trying to create a controller and instead open 
> the device and then resume normal connect-all scanning/processing.  We 
> are in agreement on this?
> 
Yes.

> Which leaves:
> a) how does the persistent controller get created in the first place: 
> which I answer with admin scripting, manual action, or transport event 
> generation.

Hmm. Yes, we _could_ wrap it into the initial systemd service (which 
we'll have to do anyway to re-create all events / connections on bootup).

> and
> b) if performing a connect-all, and a discovery log was read with a 
> discovery controller as a log entry, how should the cli know whether to 
> create the controller as persistent or not: which I proposed that it be 
> one of the following: a) it doesn't, only manual actions/scripting would 
> set the parameter; b) the new defaults file processing can figure out 
> that it needs to add the argument; or c) we change the default action 
> when connecting to a discovery controller to look at the KAS field to 
> see if kato is supported, and if so, always enable the persistent option 
> (in which case, no --persistent argument is necessary).
> 
You're talking about discovery referrals, right?
In this case I'd go with case a), and simply create a non-persistent 
connection.
The reasoning is that if the admin wants to have a persistent discovery he 
should be ensuring that this secondary controller is found during initial 
discovery (ie with the abovementioned systemd boot service).

(And discovery referrals are tricky anyway; I've created two discovery 
controllers, each having a reference to the other one in order to have a 
failover scenario, but ended up with a looping nvme connect-all ... need 
to send a patch for that.)

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts
  2019-05-07 20:26   ` James Smart
@ 2019-05-09 15:40     ` Hannes Reinecke
  0 siblings, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2019-05-09 15:40 UTC (permalink / raw)


On 5/7/19 10:26 PM, James Smart wrote:
> 
>> +# Events from persistent discovery controllers or nvme-fc transport 
>> events
>> +ACTION=="change", SUBSYSTEM=="nvme", ENV{NVME_EVENT}=="discovery",\
>> +? ENV{NVME_CTLR_NAME}=="*", ENV{NVME_TRTYPE}=="*", 
>> ENV{NVME_TRADDR}=="*", \
>> +? ENV{NVME_TRSVCID}=="*", ENV{NVME_HOST_TRADDR}=="*", \
>> +? RUN+="/usr/bin/systemctl --no-block start 
>> nvmf-connect at --device=$env{NVME_CTLR_NAME}\t--transport=$env{NVME_TRTYPE}\t--traddr=$env{NVME_TRADDR}\t--trsvcid=$env{NVME_TRSVCID}\t--host-traddr=$env{NVME_HOST_TRADDR}.service" 
>>
>> +
>> +# nvme-fc transport generated events (old-style for compatibility)
>> +ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", \
>> +? ENV{NVMEFC_HOST_TRADDR}=="*",? ENV{NVMEFC_TRADDR}=="*", \
>> +? RUN+="/usr/bin/systemctl --no-block start 
>> nvmf-connect at --device=none\t--transport=fc\t--traddr=$env{NVMEFC_TRADDR}\t--trsvcid=none\t--host-traddr=$env{NVMEFC_HOST_TRADDR}.service" 
>>
>>
> 
> I assume that we should be adding --quiet to the above lines.  No need 
> to get it from the event.  Agree?
> 
> And we would not add --persistent.  Which makes me assume --persistent 
> would be left to manual/scripted calls or possibly pulled in by the 
> "defaults" file we were discussing.  So, the linux policy is our 
> default discovery controller is non-persistent and requires admin 
> actions to enable the persistent option - as opposed to - looking at 
> IdentifyController:KAS to decide if KATO is supported, and if so, make 
> the connection persistent?
> 
Didn't we discuss this in another thread?
My preferred workflow would be to create the persistent discovery 
connection during startup (by eg a system service or what have you),
and have the uevent use that connection.
Consequently, I would fill in a real device name with the --device 
option above.

Yes, this would be open to race conditions (ie someone might go ahead 
and re-create the device with a different connection before the event is 
processed), but you didn't like my argument to skip the --device call 
and have nvme-cli match on existing discovery connections ... and if we 
don't check for the connection in nvme-cli we'll always have that.

And it feels really stupid to have a persistent discovery connection 
_just_ for AENs, and then have nvme-cli create _another_ connection when 
processing the event.
That really is a waste of resources.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare at suse.de                              +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-07 16:56     ` James Smart
  2019-05-08  6:10       ` Hannes Reinecke
@ 2019-05-12 15:00       ` Sagi Grimberg
  1 sibling, 0 replies; 22+ messages in thread
From: Sagi Grimberg @ 2019-05-12 15:00 UTC (permalink / raw)



>> Actually, I would rather scan existing controllers, and use the first 
>> one that matches the arguments.
>> With the current design we would have different calling sequences, 
>> depending on whether it's the first call (which creates the 
>> controller), or any subsequent call.
>> If we were looking up the existing controller from the arguments we 
>> could be using the same commandline throughout.
>>
> 
> Well, this is where we're at odds.  We need to decide if we should 
> create another full controller instance when the event occurs, which is 
> what FC does today (can have multiple running concurrently), or whether 
> we want to reuse the persistent one. I would think implementers of the 
> persistent discovery controller would want you to reuse it and not have 
> to support more concurrent discovery controller instances.  The "reuse" 
> case is why passing the --device arg makes it very straight forward.  We 
> could always have the utility get the addressing info and look at 
> existing controller and decide to use an existing one - I assume we 
> would only select persistent ones in order to avoid delete race 
> conditions, but seems like a lot of work for what the AEN already 
> knows.

It seems backwards to instantiate a new discovery controller when you
already have a usable one...

> I do have a little concern that there's always a window from 
> when the event is posted to when the /dev devicename is opened, which 
> allows for the device name to be terminated and possibly reused for 
> something else, but I guess we're willing to live with this.

Let's start by defining the expected behavior in this case. I would
expect that it will execute the discovery if the device node exists
and fail if it is something else.

If the discovery device node went away - we should, by definition, make
no effort to connect according to the discovery log page (if the device
node was deleted then discovery log change events are not interesting).

So we could have discover validate traddr/trsvcid if --device was given
but still use the existing device node.
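
Something along those lines, as a sketch only (sysfs attribute names per
the nvme core layer; the controller name nvme0 is hypothetical):

  # validate that the named controller matches the event before reusing it
  ctrl=nvme0
  grep -q "traddr=${NVME_TRADDR}" /sys/class/nvme/${ctrl}/address && \
  grep -q "trsvcid=${NVME_TRSVCID}" /sys/class/nvme/${ctrl}/address && \
  nvme connect-all --device=${ctrl}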

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-08  6:10       ` Hannes Reinecke
  2019-05-08 19:31         ` James Smart
@ 2019-05-12 15:02         ` Sagi Grimberg
  1 sibling, 0 replies; 22+ messages in thread
From: Sagi Grimberg @ 2019-05-12 15:02 UTC (permalink / raw)



> My concern here is that it'll be quite hard to use the --device argument
> from within the udev event; the event itself doesn't specify the device,
> making it really hard to use in the first place.
> Precisely due to the race issues you mentioned.
> 
> But now it becomes a bit quirky; we need to have the persistent 
> connection to get the AENs (i.e. we do have a device node), but we cannot 
> use it from the uevent, and have to create a new one.
> 
> So this is my issue here: _if_ we have a persistent connection which 
> generates AENs, we should be using the very same connection to get the 
> discovery information in the connect-all call.
> 
> Which means we probably have to update the discovery AEN uevents to 
> include a device name; then the uevent will carry the information, and 
> the presence of the device in the uevent will tell us whether to use the 
> persistent connection or not.
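
For reference, such an event's payload could then look like this (a
sketch; variable names per the udev rules in patch 5/5, values are
placeholders):

  NVME_EVENT=discovery
  NVME_CTLR_NAME=nvme0
  NVME_TRTYPE=tcp
  NVME_TRADDR=192.168.1.1
  NVME_TRSVCID=8009
  NVME_HOST_TRADDR=none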

that's fine...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-09  6:07           ` Hannes Reinecke
@ 2019-05-12 15:08             ` Sagi Grimberg
  2019-05-13  6:15               ` Hannes Reinecke
  0 siblings, 1 reply; 22+ messages in thread
From: Sagi Grimberg @ 2019-05-12 15:08 UTC (permalink / raw)



> You're talking about discovery referrals, right?
> In this case I'd go with case a), and simply create a non-persistent 
> connection.
> Reasoning is that if the admin wants to have a persistent discovery he 
> should be ensuring that this secondary controller is found during initial 
> discovery (i.e. with the abovementioned systemd boot service).
> 
> (And discovery referrals are tricky anyway; I've created two discovery 
> controllers each having a reference to the other one in order to have a 
> failover scenario, but ended up with a looping nvme connect-all ... need 
> to send a patch for that.)

Hmm...

I see two options here:
1. create a new transient discovery controller for each referral, as we
do today.
2. make --persistent follow referrals such that we have a persistent
device for each, and then skip referrals as we will expect events
coming from them...

I would suggest (1) as it's simpler.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts
  2019-05-04  0:04 ` [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts James Smart
  2019-05-07  7:17   ` Hannes Reinecke
  2019-05-07 20:26   ` James Smart
@ 2019-05-12 15:15   ` Sagi Grimberg
  2 siblings, 0 replies; 22+ messages in thread
From: Sagi Grimberg @ 2019-05-12 15:15 UTC (permalink / raw)




On 5/3/19 5:04 PM, James Smart wrote:
> This set of scripts is a combination of those sent by Hannes, Sagi,
> and I in earlier patches and RFC's.
> 
> Auto-connect operates by the nvme core layer or nvme-fc transport
> generating a udev event with discovery-controller addressing
> information. The nvme core layer generates an event when a
> persistent discovery controller generates a Discovery Log Change
> Notification AEN.  The nvme-fc transport generates an event when
> an FC rport that has an NVME Discovery controller is detected or
> when an FC state change event occurs for an FC rport that has
> an NVME Discovery controller.
> 
> The udev event is handled by a script that extracts the Discovery
> controller addressing information and initiates a systemd service
> to perform a "nvme connect-all" to the Discovery controller.
> The "nvme connect-all" request is not called directly from the udev
> handler itself as the request may take some time or stall altogether,
> which would block other udev event handling.  By transitioning to
> a sytemd service, the call can take as much time as needed to
> complete.
> 
> The scripts consist of:
> - A udev script that handles nvme core and nvme-fc udev events.
>    The udev handler starts a nvmf-connect systemd service.
> - A nvmf-connect systemd service. The service, in its instance
>    name, is passed the connect arguments for the discovery
>    controller. The service performs a "nvme connect-all" to the
>    discovery controller.
> - A nvmefc-boot-connections systemd service. This is a run-once
>    service run after udev is enabled, which will replay events
>    generated by NVME-FC devices detected during boot while udev
>    is not yet running.
> - To stop autoconnect an additional nvmf-connect.target has
>    been added, which will instruct systemd to cancel all
>    outstanding autoconnect services.
> 
> Note: Although the nvme-fc subsystem is converting to use the
>    same nvme core layer event mechanism, the nvme-fc-specific
>    udev event that has been in existence for a while is contained
>    in the script so that the utilities may run against older
>    kernels.
> 
> Signed-off-by: James Smart <jsmart2021 at gmail.com>
> CC: Simon Schricker <sschricker at suse.com>
> CC: Hannes Reinecke <hare at suse.com>
> CC: Sagi Grimberg <sagi at grimberg.me>
> ---
> I could use some recommendations on where to install the
> 70-nvmf-autoconnect.conf and nvmf-connect.target files
> ---
>   Makefile                                         | 10 +++++++++-
>   nvme.spec.in                                     |  8 ++++++++
>   nvmf-autoconnect/70-nvmf-autoconnect.conf        |  1 +
>   nvmf-autoconnect/70-nvmf-autoconnect.rules       | 18 ++++++++++++++++++
>   nvmf-autoconnect/nvmefc-boot-connections.service |  9 +++++++++
>   nvmf-autoconnect/nvmf-connect.target             |  2 ++
>   nvmf-autoconnect/nvmf-connect at .service           | 14 ++++++++++++++
>   7 files changed, 61 insertions(+), 1 deletion(-)
>   create mode 100644 nvmf-autoconnect/70-nvmf-autoconnect.conf
>   create mode 100644 nvmf-autoconnect/70-nvmf-autoconnect.rules
>   create mode 100644 nvmf-autoconnect/nvmefc-boot-connections.service
>   create mode 100644 nvmf-autoconnect/nvmf-connect.target
>   create mode 100644 nvmf-autoconnect/nvmf-connect at .service
> 
> diff --git a/Makefile b/Makefile
> index 4bfbebb..a86751e 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -8,6 +8,8 @@ DESTDIR =
>   PREFIX ?= /usr/local
>   SYSCONFDIR = /etc
>   SBINDIR = $(PREFIX)/sbin
> +SYSTEMDDIR = /usr/lib/systemd
> +UDEVDIR = /usr/lib/udev

Let's do this instead:
+LIBDIR ?= $(PREFIX)/lib
+SYSTEMDDIR ?= $(LIBDIR)/systemd
+UDEVDIR ?= $(LIBDIR)/udev
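
That way packagers can override the paths at build time, e.g. (a sketch):

  make install LIBDIR=/lib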

>   LIB_DEPENDS =
>   
>   ifeq ($(LIBUUID),0)
> @@ -86,6 +88,12 @@ install-bash-completion:
>   	$(INSTALL) -d $(DESTDIR)$(PREFIX)/share/bash-completion/completions
>   	$(INSTALL) -m 644 -T ./completions/bash-nvme-completion.sh $(DESTDIR)$(PREFIX)/share/bash-completion/completions/nvme
>   
> +install-udev:
> +	$(INSTALL) -d $(DESTDIR)$(SYSTEMDDIR)/system
> +	$(INSTALL) -m 644 ./nvmf-autoconnect/*.service $(DESTDIR)$(SYSTEMDDIR)/system
> +	$(INSTALL) -d $(DESTDIR)$(UDEVDIR)/rules.d
> +	$(INSTALL) -m 644 ./nvmf-autoconnect/*.rules $(DESTDIR)$(UDEVDIR)/rules.d
> +
>   install-zsh-completion:
>   	$(INSTALL) -d $(DESTDIR)$(PREFIX)/share/zsh/site-functions
>   	$(INSTALL) -m 644 -T ./completions/_nvme $(DESTDIR)$(PREFIX)/share/zsh/site-functions/_nvme
> @@ -106,7 +114,7 @@ install-etc:
>   		$(INSTALL) -m 644 -T ./etc/discovery.conf.in $(DESTDIR)$(SYSCONFDIR)/nvme/discovery.conf; \
>   	fi
>   
> -install-spec: install-bin install-man install-bash-completion install-zsh-completion install-etc
> +install-spec: install-bin install-man install-bash-completion install-zsh-completion install-etc install-udev
>   install: install-spec install-hostparams
>   
>   nvme.spec: nvme.spec.in NVME-VERSION-FILE
> diff --git a/nvme.spec.in b/nvme.spec.in
> index 6934f8f..40a5eff 100644
> --- a/nvme.spec.in
> +++ b/nvme.spec.in
> @@ -35,6 +35,9 @@ make install-spec DESTDIR=%{buildroot} PREFIX=/usr
>   %{_sysconfdir}/nvme/hostnqn
>   %{_sysconfdir}/nvme/hostid
>   %{_sysconfdir}/nvme/discovery.conf
> +/usr/lib/udev/rules.d/70-nvmf-autoconnect.rules
> +/usr/lib/systemd/system/nvmf-connect at .service
> +/usr/lib/systemd/system/nvmefc-boot-connections.service

How about making the prefix %{_libdir} and passing make install a
LIBDIR=%{_libdir} argument as well?
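
Something like this in the spec's %install section (a sketch):

  make install-spec DESTDIR=%{buildroot} PREFIX=/usr LIBDIR=%{_libdir}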

>   
>   %clean
>   rm -rf $RPM_BUILD_ROOT
> @@ -47,6 +50,11 @@ if [ $1 -eq 1 ]; then # 1 : This package is being installed for the first time
>           if [ ! -s %{_sysconfdir}/nvme/hostid ]; then
>                   uuidgen > %{_sysconfdir}/nvme/hostid
>           fi
> +
> +	# apply udev and systemd changes that we did
> +	udevadm control --reload-rules && udevadm trigger
> +	systemctl daemon-reload
> + fi
>   fi
>   
>   %changelog
> diff --git a/nvmf-autoconnect/70-nvmf-autoconnect.conf b/nvmf-autoconnect/70-nvmf-autoconnect.conf
> new file mode 100644
> index 0000000..844f3d9
> --- /dev/null
> +++ b/nvmf-autoconnect/70-nvmf-autoconnect.conf
> @@ -0,0 +1 @@
> +install_items+="/usr/lib/udev/rules.d/70-nvmf-autoconnect.rules"
> diff --git a/nvmf-autoconnect/70-nvmf-autoconnect.rules b/nvmf-autoconnect/70-nvmf-autoconnect.rules
> new file mode 100644
> index 0000000..5405314
> --- /dev/null
> +++ b/nvmf-autoconnect/70-nvmf-autoconnect.rules
> @@ -0,0 +1,18 @@
> +#
> +# nvmf-autoconnect.rules:
> +#   Handles udev events which automatically invoke a scan via the
> +#   discovery controller and connect to elements in the discovery log.
> +#
> +#
> +
> +# Events from persistent discovery controllers or nvme-fc transport events
> +ACTION=="change", SUBSYSTEM=="nvme", ENV{NVME_EVENT}=="discovery",\
> +  ENV{NVME_CTLR_NAME}=="*", ENV{NVME_TRTYPE}=="*", ENV{NVME_TRADDR}=="*", \
> +  ENV{NVME_TRSVCID}=="*", ENV{NVME_HOST_TRADDR}=="*", \
> +  RUN+="/usr/bin/systemctl --no-block start nvmf-connect at --device=$env{NVME_CTLR_NAME}\t--transport=$env{NVME_TRTYPE}\t--traddr=$env{NVME_TRADDR}\t--trsvcid=$env{NVME_TRSVCID}\t--host-traddr=$env{NVME_HOST_TRADDR}.service"

with --quiet?

> +
> +# nvme-fc transport generated events (old-style for compatibility)
> +ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", \
> +  ENV{NVMEFC_HOST_TRADDR}=="*",  ENV{NVMEFC_TRADDR}=="*", \
> +  RUN+="/usr/bin/systemctl --no-block start nvmf-connect at --device=none\t--transport=fc\t--traddr=$env{NVMEFC_TRADDR}\t--trsvcid=none\t--host-traddr=$env{NVMEFC_HOST_TRADDR}.service"
> +
> diff --git a/nvmf-autoconnect/nvmefc-boot-connections.service b/nvmf-autoconnect/nvmefc-boot-connections.service
> new file mode 100644
> index 0000000..84f6486
> --- /dev/null
> +++ b/nvmf-autoconnect/nvmefc-boot-connections.service
> @@ -0,0 +1,9 @@
> +[Unit]
> +Description=Auto-connect to subsystems on FC-NVME devices found during boot
> +
> +[Service]
> +Type=oneshot
> +ExecStart=/bin/sh -c "echo add > /sys/class/fc/fc_udev_device/nvme_discovery"
> +
> +[Install]
> +WantedBy=default.target
> diff --git a/nvmf-autoconnect/nvmf-connect.target b/nvmf-autoconnect/nvmf-connect.target
> new file mode 100644
> index 0000000..f64a37c
> --- /dev/null
> +++ b/nvmf-autoconnect/nvmf-connect.target
> @@ -0,0 +1,2 @@
> +[Unit]
> +Description=All instances of nvmf-autoconnect daemon
> diff --git a/nvmf-autoconnect/nvmf-connect at .service b/nvmf-autoconnect/nvmf-connect at .service
> new file mode 100644
> index 0000000..7d9f856
> --- /dev/null
> +++ b/nvmf-autoconnect/nvmf-connect at .service
> @@ -0,0 +1,14 @@
> +#
> +# Unit file used by 70-nvmf-autoconnect.rules.
> +#
> +
> +[Unit]
> +Description=NVMf auto-connect scan upon nvme discovery controller events
> +After=syslog.target
> +PartOf=nvmf-connect.target
> +Requires=nvmf-connect.target
> +
> +[Service]
> +Type=simple
> +Environment="CONNECT_ARGS=%i"
> +ExecStart=/bin/sh -c "/usr/sbin/nvme connect-all `/usr/bin/echo -e $CONNECT_ARGS`"
> 
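
For manual testing, the templated unit can be started the same way the
udev rule invokes it (a sketch; addresses are placeholders, and the
literal \t separators are expanded by the `echo -e` in the unit file):

  systemctl --no-block start \
    'nvmf-connect@--device=none\t--transport=tcp\t--traddr=192.168.1.1\t--trsvcid=8009\t--host-traddr=none.service'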

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name
  2019-05-12 15:08             ` Sagi Grimberg
@ 2019-05-13  6:15               ` Hannes Reinecke
  0 siblings, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2019-05-13  6:15 UTC (permalink / raw)


On 5/12/19 5:08 PM, Sagi Grimberg wrote:
> 
>> You're talking about discovery referrals, right?
>> In this case I'd go with case a), and simply create a non-persistent 
>> connection.
>> Reasoning is that if the admin wants to have a persistent discovery he 
>> should be ensuring that this secondary controller is found during 
>> initial discovery (i.e. with the abovementioned systemd boot service).
>>
>> (And discovery referrals are tricky anyway; I've created two discovery 
>> controllers each having a reference to the other one in order to have 
>> a failover scenario, but ended up with a looping nvme connect-all ... 
>> need to send a patch for that.)
> 
> Hmm...
> 
> I see two options here:
> 1. create a new transient discovery controller for each referral, as we
> do today.
> 2. make --persistent follow referrals such that we have a persistent
> device for each, and then skip referrals as we will expect events
> coming from them...
> 
> I would suggest (1) as it's simpler.

Same here.
A persistent connection is pretty much an admin choice, and we cannot 
and should not second-guess the admin here.
If the admin wants a persistent connection he should create one.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2019-05-13  6:15 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-04  0:04 [PATCH nvme-cli rfc 0/5] nvmf auto-connect scripts James Smart
2019-05-04  0:04 ` [PATCH nvme-cli rfc 1/5] fabrics: ignore arguments that pass in "none" James Smart
2019-05-07  7:08   ` Hannes Reinecke
2019-05-04  0:04 ` [PATCH nvme-cli rfc 2/5] fabrics: support persistent connections to a discovery controller James Smart
2019-05-07  7:11   ` Hannes Reinecke
2019-05-04  0:04 ` [PATCH nvme-cli rfc 3/5] fabrics: allow discover to address discovery controller by persistent name James Smart
2019-05-07  7:13   ` Hannes Reinecke
2019-05-07 16:56     ` James Smart
2019-05-08  6:10       ` Hannes Reinecke
2019-05-08 19:31         ` James Smart
2019-05-09  6:07           ` Hannes Reinecke
2019-05-12 15:08             ` Sagi Grimberg
2019-05-13  6:15               ` Hannes Reinecke
2019-05-12 15:02         ` Sagi Grimberg
2019-05-12 15:00       ` Sagi Grimberg
2019-05-04  0:04 ` [PATCH nvme-cli rfc 4/5] fabrics: add --quiet option James Smart
2019-05-07  7:14   ` Hannes Reinecke
2019-05-04  0:04 ` [PATCH nvme-cli rfc 5/5] nvmf auto-connect scripts James Smart
2019-05-07  7:17   ` Hannes Reinecke
2019-05-07 20:26   ` James Smart
2019-05-09 15:40     ` Hannes Reinecke
2019-05-12 15:15   ` Sagi Grimberg
