All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH nvme-cli rfc 0/6] Support discovery log change events
@ 2019-02-23  2:32 Sagi Grimberg
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 1/6] fabrics: ignore arguments that pass in "none" Sagi Grimberg
                   ` (6 more replies)
  0 siblings, 7 replies; 34+ messages in thread
From: Sagi Grimberg @ 2019-02-23  2:32 UTC (permalink / raw)


This series corresponds to the nvme-core discovery uevent notification
for discovery log changes.

The motivation is that whenever we get a discovery log change event from
a discovery controller, we want to query the discovery log page again
and connect to the new controllers in the fabric.

To support this we have a couple of steps:
1. establish a "persistent" discovery controller
2. allow discovery over an existing "persisent" discovery controller device
   node
3. allow users to pass "none" to nvme-cli for arguments that they don't
   care about (this will be useful when we have udev rule that catches events
   and simply feed them to nvme-cli without checking any parameters or building
   a commandline arg string).
4. add udev+systemd scripting to make this all automatic.

Review and feedback is welcome.

Hannes Reinecke (1):
  fc: add autoconnect systemd service and udev rule for fc discovery log
    changes

Sagi Grimberg (5):
  fabrics: ignore arguments that pass in "none"
  fabrics: support persistent connections to a discovery controller
  fabrics: allow user to retrieve discovery log from existing discovery
    controller
  fabrics: add --quiet option
  fabrics: systemd and udev support for automatic discovery log changes

 Makefile                                | 10 +++-
 fabrics.c                               | 62 ++++++++++++++++++-------
 fabrics.h                               |  2 +
 nvme.spec.in                            |  7 +++
 systemd/nvmefc-boot-connections.service |  9 ++++
 systemd/nvmf-connect at .service           | 15 ++++++
 udev/70-nvmf-autoconnect.rules          | 11 +++++
 7 files changed, 98 insertions(+), 18 deletions(-)
 create mode 100644 systemd/nvmefc-boot-connections.service
 create mode 100644 systemd/nvmf-connect at .service
 create mode 100644 udev/70-nvmf-autoconnect.rules

-- 
2.17.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 1/6] fabrics: ignore arguments that pass in "none"
  2019-02-23  2:32 [PATCH nvme-cli rfc 0/6] Support discovery log change events Sagi Grimberg
@ 2019-02-23  2:32 ` Sagi Grimberg
  2019-05-07  9:17   ` Max Gurtovoy
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 2/6] fabrics: support persistent connections to a discovery controller Sagi Grimberg
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 34+ messages in thread
From: Sagi Grimberg @ 2019-02-23  2:32 UTC (permalink / raw)


As we want to support discovery uevents over different
transports, we want to allow the kernel to provide missing
information in the form of none and have nvme-cli properly
ignore it.

One example is the host_traddr. If it is not set (which means
that the default route determined the host address) we will
want to do the same for newly discovered controllers.

So allow users to pass 'none' arguments as well.

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 fabrics.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fabrics.c b/fabrics.c
index f5cd2127db05..ad9a15e72070 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -565,7 +565,7 @@ add_argument(char **argstr, int *max_len, char *arg_str, char *arg)
 {
 	int len;
 
-	if (arg) {
+	if (arg && strcmp(arg, "none")) {
 		len = snprintf(*argstr, *max_len, ",%s=%s", arg_str, arg);
 		if (len < 0)
 			return -EINVAL;
@@ -658,14 +658,14 @@ retry:
 		return -EINVAL;
 	p += len;
 
-	if (cfg.hostnqn) {
+	if (cfg.hostnqn && strcmp(cfg.hostnqn, "none")) {
 		len = sprintf(p, ",hostnqn=%s", cfg.hostnqn);
 		if (len < 0)
 			return -EINVAL;
 		p += len;
 	}
 
-	if (cfg.hostid) {
+	if (cfg.hostid && strcmp(cfg.hostnqn, "none")) {
 		len = sprintf(p, ",hostid=%s", cfg.hostid);
 		if (len < 0)
 			return -EINVAL;
@@ -700,7 +700,7 @@ retry:
 		p += len;
 	}
 
-	if (cfg.host_traddr) {
+	if (cfg.host_traddr && strcmp(cfg.host_traddr, "none")) {
 		len = sprintf(p, ",host_traddr=%s", cfg.host_traddr);
 		if (len < 0)
 			return -EINVAL;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 2/6] fabrics: support persistent connections to a discovery controller
  2019-02-23  2:32 [PATCH nvme-cli rfc 0/6] Support discovery log change events Sagi Grimberg
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 1/6] fabrics: ignore arguments that pass in "none" Sagi Grimberg
@ 2019-02-23  2:32 ` Sagi Grimberg
  2019-05-07  9:22   ` Max Gurtovoy
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 3/6] fabrics: allow user to retrieve discovery log from existing " Sagi Grimberg
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 34+ messages in thread
From: Sagi Grimberg @ 2019-02-23  2:32 UTC (permalink / raw)


Simply don't destroy the discovery controller after getting the
log pages. Note that persistent connection to a discovery subsystem
require to pass in a non-zero kato value, so if not provided we
simply use a default of 30 seconds kato.

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 fabrics.c | 10 +++++++++-
 fabrics.h |  2 ++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/fabrics.c b/fabrics.c
index ad9a15e72070..b9c5378b8034 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -65,6 +65,7 @@ static struct config {
 	int  disable_sqflow;
 	int  hdr_digest;
 	int  data_digest;
+	bool persistent;
 } cfg = { NULL };
 
 #define BUF_SIZE		4096
@@ -855,7 +856,8 @@ static int do_discover(char *argstr, bool connect)
 		return errno;
 	ret = nvmf_get_log_page_discovery(dev_name, &log, &numrec);
 	free(dev_name);
-	remove_ctrl(instance);
+	if (!cfg.persistent)
+		remove_ctrl(instance);
 
 	switch (ret) {
 	case DISC_OK:
@@ -931,6 +933,9 @@ static int discover_from_conf_file(const char *desc, char *argstr,
 		if (err)
 			continue;
 
+		if (cfg.persistent && !cfg.keep_alive_tmo)
+			cfg.keep_alive_tmo = NVMF_DEF_DISC_TMO;
+
 		err = build_options(argstr, BUF_SIZE);
 		if (err) {
 			ret = err;
@@ -973,6 +978,7 @@ int discover(const char *desc, int argc, char **argv, bool connect)
 		{"nr-write-queues", 'W', "LIST", CFG_INT, &cfg.nr_write_queues,    required_argument, "number of write queues to use (default 0)" },
 		{"nr-poll-queues",  'P', "LIST", CFG_INT, &cfg.nr_poll_queues,    required_argument, "number of poll queues to use (default 0)" },
 		{"queue-size",      'Q', "LIST", CFG_INT, &cfg.queue_size,      required_argument, "number of io queue elements to use (default 128)" },
+		{"persistent",  'p', "LIST", CFG_NONE, &cfg.persistent,  no_argument, "persistent discovery connection" },
 		{NULL},
 	};
 
@@ -987,6 +993,8 @@ int discover(const char *desc, int argc, char **argv, bool connect)
 		return discover_from_conf_file(desc, argstr,
 				command_line_options, connect);
 	} else {
+		if (cfg.persistent && !cfg.keep_alive_tmo)
+			cfg.keep_alive_tmo = NVMF_DEF_DISC_TMO;
 		ret = build_options(argstr, BUF_SIZE);
 		if (ret)
 			return ret;
diff --git a/fabrics.h b/fabrics.h
index 988f3ef2fbc4..7c1664b80d51 100644
--- a/fabrics.h
+++ b/fabrics.h
@@ -1,6 +1,8 @@
 #ifndef _DISCOVER_H
 #define _DISCOVER_H
 
+#define NVMF_DEF_DISC_TMO	30
+
 extern int discover(const char *desc, int argc, char **argv, bool connect);
 extern int connect(const char *desc, int argc, char **argv);
 extern int disconnect(const char *desc, int argc, char **argv);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 3/6] fabrics: allow user to retrieve discovery log from existing discovery controller
  2019-02-23  2:32 [PATCH nvme-cli rfc 0/6] Support discovery log change events Sagi Grimberg
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 1/6] fabrics: ignore arguments that pass in "none" Sagi Grimberg
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 2/6] fabrics: support persistent connections to a discovery controller Sagi Grimberg
@ 2019-02-23  2:32 ` Sagi Grimberg
  2019-05-07  9:27   ` Max Gurtovoy
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 4/6] fabrics: add --quiet option Sagi Grimberg
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 34+ messages in thread
From: Sagi Grimberg @ 2019-02-23  2:32 UTC (permalink / raw)


Simply allow discover to receive the discovery device node name.
Also centralize extraction of controller instance from the controller
name to a common helper.

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 fabrics.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/fabrics.c b/fabrics.c
index b9c5378b8034..702f928cd8ed 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -189,6 +189,19 @@ static const char *cms_str(__u8 cm)
 
 static int do_discover(char *argstr, bool connect);
 
+static int ctrl_instance(char *device)
+{
+	int ret, instance;
+
+	device = basename(device);
+	ret = sscanf(device, "nvme%d", &instance);
+	if (ret < 0)
+		return ret;
+	if (!ret)
+		return -1;
+	return instance;
+}
+
 static int add_ctrl(const char *argstr)
 {
 	substring_t args[MAX_OPT_ARGS];
@@ -848,7 +861,10 @@ static int do_discover(char *argstr, bool connect)
 	char *dev_name;
 	int instance, numrec = 0, ret;
 
-	instance = add_ctrl(argstr);
+	if (!cfg.device)
+		instance = add_ctrl(argstr);
+	else
+		instance = ctrl_instance(cfg.device);
 	if (instance < 0)
 		return instance;
 
@@ -856,7 +872,7 @@ static int do_discover(char *argstr, bool connect)
 		return errno;
 	ret = nvmf_get_log_page_discovery(dev_name, &log, &numrec);
 	free(dev_name);
-	if (!cfg.persistent)
+	if (!cfg.device && !cfg.persistent)
 		remove_ctrl(instance);
 
 	switch (ret) {
@@ -969,6 +985,7 @@ int discover(const char *desc, int argc, char **argv, bool connect)
 		{"hostnqn",     'q', "LIST", CFG_STRING, &cfg.hostnqn,     required_argument, "user-defined hostnqn (if default not used)" },
 		{"hostid",      'I', "LIST", CFG_STRING, &cfg.hostid,      required_argument, "user-defined hostid (if default not used)"},
 		{"raw",         'r', "LIST", CFG_STRING, &cfg.raw,         required_argument, "raw output file" },
+		{"device",      'd', "LIST", CFG_STRING, &cfg.device, required_argument, "use existing discovery contoller device" },
 		{"keep-alive-tmo",  'k', "LIST", CFG_INT, &cfg.keep_alive_tmo,  required_argument, "keep alive timeout period in seconds" },
 		{"reconnect-delay", 'c', "LIST", CFG_INT, &cfg.reconnect_delay, required_argument, "reconnect timeout period in seconds" },
 		{"ctrl-loss-tmo",   'l', "LIST", CFG_INT, &cfg.ctrl_loss_tmo,   required_argument, "controller loss timeout period in seconds" },
@@ -1124,15 +1141,10 @@ static int disconnect_by_nqn(char *nqn)
 static int disconnect_by_device(char *device)
 {
 	int instance;
-	int ret;
-
-	device = basename(device);
-	ret = sscanf(device, "nvme%d", &instance);
-	if (ret < 0)
-		return ret;
-	if (!ret)
-		return -1;
 
+	instance = ctrl_instance(device);
+	if (instance < 0)
+		return instance;
 	return remove_ctrl(instance);
 }
 
-- 
2.17.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 4/6] fabrics: add --quiet option
  2019-02-23  2:32 [PATCH nvme-cli rfc 0/6] Support discovery log change events Sagi Grimberg
                   ` (2 preceding siblings ...)
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 3/6] fabrics: allow user to retrieve discovery log from existing " Sagi Grimberg
@ 2019-02-23  2:32 ` Sagi Grimberg
  2019-05-07  9:35   ` Max Gurtovoy
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 5/6] fabrics: systemd and udev support for automatic discovery log changes Sagi Grimberg
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 34+ messages in thread
From: Sagi Grimberg @ 2019-02-23  2:32 UTC (permalink / raw)


Now we are going to have discovery log change events, so
instead of trying to figure out which NVM subsystem ports
are already connected, we let the driver do that and
only suppress the failure messages.

This option will be used by the discovery log change corresponding
udev rule.

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 fabrics.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/fabrics.c b/fabrics.c
index 702f928cd8ed..f08fe3f9a0be 100644
--- a/fabrics.c
+++ b/fabrics.c
@@ -66,6 +66,7 @@ static struct config {
 	int  hdr_digest;
 	int  data_digest;
 	bool persistent;
+	bool quiet;
 } cfg = { NULL };
 
 #define BUF_SIZE		4096
@@ -216,9 +217,11 @@ static int add_ctrl(const char *argstr)
 		goto out;
 	}
 
-	if (write(fd, argstr, len) != len) {
-		fprintf(stderr, "Failed to write to %s: %s\n",
-			 PATH_NVME_FABRICS, strerror(errno));
+	ret = write(fd, argstr, len);
+	if (ret != len) {
+		if (errno != EALREADY || !cfg.quiet)
+			fprintf(stderr, "Failed to write to %s: %s\n",
+				 PATH_NVME_FABRICS, strerror(errno));
 		ret = -errno;
 		goto out_close;
 	}
@@ -840,6 +843,10 @@ static int connect_ctrls(struct nvmf_disc_rsp_page_hdr *log, int numrec)
 		/* already connected print message	*/
 		if (instance == -EALREADY) {
 			const char *traddr = log->entries[i].traddr;
+
+			if (cfg.quiet)
+				continue;
+
 			fprintf(stderr,
 				"traddr=%.*s is already connected\n",
 				space_strip_len(NVMF_TRADDR_SIZE, traddr),
@@ -996,6 +1003,7 @@ int discover(const char *desc, int argc, char **argv, bool connect)
 		{"nr-poll-queues",  'P', "LIST", CFG_INT, &cfg.nr_poll_queues,    required_argument, "number of poll queues to use (default 0)" },
 		{"queue-size",      'Q', "LIST", CFG_INT, &cfg.queue_size,      required_argument, "number of io queue elements to use (default 128)" },
 		{"persistent",  'p', "LIST", CFG_NONE, &cfg.persistent,  no_argument, "persistent discovery connection" },
+		{"quiet",       'Q', "LIST", CFG_NONE, &cfg.quiet,  no_argument, "suppress already connected errors" },
 		{NULL},
 	};
 
-- 
2.17.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 5/6] fabrics: systemd and udev support for automatic discovery log changes
  2019-02-23  2:32 [PATCH nvme-cli rfc 0/6] Support discovery log change events Sagi Grimberg
                   ` (3 preceding siblings ...)
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 4/6] fabrics: add --quiet option Sagi Grimberg
@ 2019-02-23  2:32 ` Sagi Grimberg
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 6/6] fc: add autoconnect systemd service and udev rule for fc " Sagi Grimberg
  2019-02-24 17:17 ` [PATCH nvme-cli rfc 0/6] Support discovery log change events Hannes Reinecke
  6 siblings, 0 replies; 34+ messages in thread
From: Sagi Grimberg @ 2019-02-23  2:32 UTC (permalink / raw)


Now that a fabrics controller can report discovery log change events,
we add a udev rule to act on a discovery log change event by running
nvme connect-all to an already established discovery controller instance.

The kernel will generate a udev event with parameters:
NVME_EVENT: currently only discovery events.
NVME_TRTYPE: the discovery controller transport type.
NVME_INSTANCE: the discovery controller instance id.
NVME_TRADDR: the discovery controller transport address.
NVME_TRSVCID: the discovery controller transport service-id.
NVME_HOST_TRADDR: the discovery controller host transport address.

udev simply feeds them as-is to nvme connect-all.

Install scripts as part of nvme-cli and apply them in post installation.

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 Makefile                       | 10 +++++++++-
 nvme.spec.in                   |  6 ++++++
 systemd/nvmf-connect at .service  | 15 +++++++++++++++
 udev/70-nvmf-autoconnect.rules |  7 +++++++
 4 files changed, 37 insertions(+), 1 deletion(-)
 create mode 100644 systemd/nvmf-connect at .service
 create mode 100644 udev/70-nvmf-autoconnect.rules

diff --git a/Makefile b/Makefile
index 303f13f256c0..86ff968691a6 100644
--- a/Makefile
+++ b/Makefile
@@ -8,6 +8,8 @@ DESTDIR =
 PREFIX ?= /usr/local
 SYSCONFDIR = /etc
 SBINDIR = $(PREFIX)/sbin
+SYSTEMDDIR = /usr/lib/systemd
+UDEVDIR = /lib/udev
 LIB_DEPENDS =
 
 ifeq ($(LIBUUID),0)
@@ -75,6 +77,12 @@ clean:
 clobber: clean
 	$(MAKE) -C Documentation clobber
 
+install-udev:
+	$(INSTALL) -d $(DESTDIR)$(SYSTEMDDIR)/system
+	$(INSTALL) -m 644 ./systemd/*.service $(DESTDIR)$(SYSTEMDDIR)/system
+	$(INSTALL) -d $(DESTDIR)$(UDEVDIR)/rules.d
+	$(INSTALL) -m 644 ./udev/*.rules $(DESTDIR)$(UDEVDIR)/rules.d
+
 install-man:
 	$(MAKE) -C Documentation install-no-build
 
@@ -106,7 +114,7 @@ install-etc:
 		$(INSTALL) -m 644 -T ./etc/discovery.conf.in $(DESTDIR)$(SYSCONFDIR)/nvme/discovery.conf; \
 	fi
 
-install-spec: install-bin install-man install-bash-completion install-zsh-completion install-etc
+install-spec: install-bin install-man install-bash-completion install-zsh-completion install-etc install-udev
 install: install-spec install-hostparams
 
 nvme.spec: nvme.spec.in NVME-VERSION-FILE
diff --git a/nvme.spec.in b/nvme.spec.in
index 6934f8fd605b..7b5b664e63d9 100644
--- a/nvme.spec.in
+++ b/nvme.spec.in
@@ -35,6 +35,8 @@ make install-spec DESTDIR=%{buildroot} PREFIX=/usr
 %{_sysconfdir}/nvme/hostnqn
 %{_sysconfdir}/nvme/hostid
 %{_sysconfdir}/nvme/discovery.conf
+/usr/lib/udev/rules.d/70-nvmf-autoconnect.rules
+/usr/lib/systemd/system/nvmf-connect at .service
 
 %clean
 rm -rf $RPM_BUILD_ROOT
@@ -47,6 +49,10 @@ if [ $1 -eq 1 ]; then # 1 : This package is being installed for the first time
         if [ ! -s %{_sysconfdir}/nvme/hostid ]; then
                 uuidgen > %{_sysconfdir}/nvme/hostid
         fi
+
+	# apply udev and systemd changes that we did
+	udevadm control --reload-rules && udevadm trigger
+	systemctl daemon-reload
 fi
 
 %changelog
diff --git a/systemd/nvmf-connect at .service b/systemd/nvmf-connect at .service
new file mode 100644
index 000000000000..694b27822f61
--- /dev/null
+++ b/systemd/nvmf-connect at .service
@@ -0,0 +1,15 @@
+#
+# Unit file used by 70-nvmf-autoconnect.rules.
+#
+
+[Unit]
+Description=NVMf automatic scan upon discovery log change events
+After=syslog.target
+
+[Service]
+Type=oneshot
+Environment="CONNECT_ARGS=%i"
+ExecStart=/bin/sh -c "nvme connect-all `/bin/echo -e $CONNECT_ARGS`"
+
+[Install]
+WantedBy=default.target
diff --git a/udev/70-nvmf-autoconnect.rules b/udev/70-nvmf-autoconnect.rules
new file mode 100644
index 000000000000..a149036c3833
--- /dev/null
+++ b/udev/70-nvmf-autoconnect.rules
@@ -0,0 +1,7 @@
+#
+# nvmf: udev event to automatically scan (via discovery controller)
+#   new nvme subsystem ports and auto-connect to the subsystems they report.
+#
+
+SUBSYSTEM=="nvme", ACTION=="change", ENV{NVME_EVENT}=="discovery",\
+  RUN+="/bin/systemctl --no-block start nvmf-connect at --device=nvme$env{NVME_INSTANCE}\t--transport=$env{NVME_TRTYPE}\t--traddr=$env{NVME_TRADDR}\t--trsvcid=$env{NVME_TRSVCID}\t--host-traddr=$env{NVME_HOST_TRADDR}.service"
-- 
2.17.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 6/6] fc: add autoconnect systemd service and udev rule for fc discovery log changes
  2019-02-23  2:32 [PATCH nvme-cli rfc 0/6] Support discovery log change events Sagi Grimberg
                   ` (4 preceding siblings ...)
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 5/6] fabrics: systemd and udev support for automatic discovery log changes Sagi Grimberg
@ 2019-02-23  2:32 ` Sagi Grimberg
  2019-02-24 17:17 ` [PATCH nvme-cli rfc 0/6] Support discovery log change events Hannes Reinecke
  6 siblings, 0 replies; 34+ messages in thread
From: Sagi Grimberg @ 2019-02-23  2:32 UTC (permalink / raw)


From: Hannes Reinecke <hare@suse.de>

Whenever a new FC remote port with a discovery subsytem is detected.
Whenever such a port is detected (by registration of the remoteport
with the nvme_fc transport by the lldd), the transport generates
a udev event.  A udev rule is created that parses the event information
into the FC remote port (traddr) that supports a discovery subsystem and
the FC host port that is connected to the FC remote port (host_traddr).

The rule then invokes an instance-based systemd service to perform a
"nvme connect-all" with the traddr and host-traddr info.

Signed-off-by: Hannes Reinecke <hare at suse.de>
Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 nvme.spec.in                            | 1 +
 systemd/nvmefc-boot-connections.service | 9 +++++++++
 udev/70-nvmf-autoconnect.rules          | 4 ++++
 3 files changed, 14 insertions(+)
 create mode 100644 systemd/nvmefc-boot-connections.service

diff --git a/nvme.spec.in b/nvme.spec.in
index 7b5b664e63d9..68cfcb626982 100644
--- a/nvme.spec.in
+++ b/nvme.spec.in
@@ -37,6 +37,7 @@ make install-spec DESTDIR=%{buildroot} PREFIX=/usr
 %{_sysconfdir}/nvme/discovery.conf
 /usr/lib/udev/rules.d/70-nvmf-autoconnect.rules
 /usr/lib/systemd/system/nvmf-connect at .service
+/usr/lib/systemd/system/nvmefc-boot-connections.service
 
 %clean
 rm -rf $RPM_BUILD_ROOT
diff --git a/systemd/nvmefc-boot-connections.service b/systemd/nvmefc-boot-connections.service
new file mode 100644
index 000000000000..aa35c1519f84
--- /dev/null
+++ b/systemd/nvmefc-boot-connections.service
@@ -0,0 +1,9 @@
+[Unit]
+Description=Auto-connect to subsystems on FC-NVME devices during boot
+
+[Service]
+Type=oneshot
+ExecStart=/bin/sh -c "echo add > /sys/class/fc/fc_udev_device/nvme_discovery"
+
+[Install]
+WantedBy=default.target
diff --git a/udev/70-nvmf-autoconnect.rules b/udev/70-nvmf-autoconnect.rules
index a149036c3833..0d8b5e5edf2a 100644
--- a/udev/70-nvmf-autoconnect.rules
+++ b/udev/70-nvmf-autoconnect.rules
@@ -5,3 +5,7 @@
 
 SUBSYSTEM=="nvme", ACTION=="change", ENV{NVME_EVENT}=="discovery",\
   RUN+="/bin/systemctl --no-block start nvmf-connect at --device=nvme$env{NVME_INSTANCE}\t--transport=$env{NVME_TRTYPE}\t--traddr=$env{NVME_TRADDR}\t--trsvcid=$env{NVME_TRSVCID}\t--host-traddr=$env{NVME_HOST_TRADDR}.service"
+
+ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", \
+  ENV{NVMEFC_HOST_TRADDR}=="*",  ENV{NVMEFC_TRADDR}=="*", \
+  RUN+="/bin/systemctl --no-block start nvmf-connect@'--transport=fc\t--host-traddr=$env{NVMEFC_HOST_TRADDR}\t--traddr=$env{NVMEFC_TRADDR}'.service"
-- 
2.17.1

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-02-23  2:32 [PATCH nvme-cli rfc 0/6] Support discovery log change events Sagi Grimberg
                   ` (5 preceding siblings ...)
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 6/6] fc: add autoconnect systemd service and udev rule for fc " Sagi Grimberg
@ 2019-02-24 17:17 ` Hannes Reinecke
  2019-02-24 22:33   ` Sagi Grimberg
  6 siblings, 1 reply; 34+ messages in thread
From: Hannes Reinecke @ 2019-02-24 17:17 UTC (permalink / raw)


On 2/23/19 3:32 AM, Sagi Grimberg wrote:
> This series corresponds to the nvme-core discovery uevent notification
> for discovery log changes.
> 
> The motivation is that whenever we get a discovery log change event from
> a discovery controller, we want to query the discovery log page again
> and connect to the new controllers in the fabric.
> 
> To support this we have a couple of steps:
> 1. establish a "persistent" discovery controller
> 2. allow discovery over an existing "persisent" discovery controller device
>     node
> 3. allow users to pass "none" to nvme-cli for arguments that they don't
>     care about (this will be useful when we have udev rule that catches events
>     and simply feed them to nvme-cli without checking any parameters or building
>     a commandline arg string).
> 4. add udev+systemd scripting to make this all automatic.
> 
> Review and feedback is welcome.
> 
> Hannes Reinecke (1):
>    fc: add autoconnect systemd service and udev rule for fc discovery log
>      changes
> 
> Sagi Grimberg (5):
>    fabrics: ignore arguments that pass in "none"
>    fabrics: support persistent connections to a discovery controller
>    fabrics: allow user to retrieve discovery log from existing discovery
>      controller
>    fabrics: add --quiet option
>    fabrics: systemd and udev support for automatic discovery log changes
> 
>   Makefile                                | 10 +++-
>   fabrics.c                               | 62 ++++++++++++++++++-------
>   fabrics.h                               |  2 +
>   nvme.spec.in                            |  7 +++
>   systemd/nvmefc-boot-connections.service |  9 aw++++
>   systemd/nvmf-connect at .service           | 15 ++++++
>   udev/70-nvmf-autoconnect.rules          | 11 +++++
>   7 files changed, 98 insertions(+), 18 deletions(-)
>   create mode 100644 systemd/nvmefc-boot-connections.service
>   create mode 100644 systemd/nvmf-connect at .service
>   create mode 100644 udev/70-nvmf-autoconnect.rules
> 
Hmm.

Last time I've posted the autoconnect scripts you said you'd rather have 
them updated to be useful for generic discovery events.
So what happened to that?
As it stands (and as posted here) these scripts are pretty much 
independent on the remaining series as they react to the existent FC 
NVMe uevents.

And the one thing which I patently dislike about those scripts is this 
utterly weird interface.
Problem here is that abstract systemd services (ie those ending with an 
'@' sign) can pass exactly _one_ argument, namely that one following the 
'@' sign.
But for nvme we need to pass in several arguments, one for each 
command-line parameter.

The way it's solved nowadays is to insert a 'tab' character (\t) between 
each argument, use the combined string as parameter to the systemd 
service, and call 'echo' to split the parameters again.
Shudder.

What I really want to do here is to add a '--dry-run' option to 
nvme-cli, which then would instruct nvme-cli to _not_ write the final 
string to the ioctl interface but rather to stdout.
This string will not contain any spaces, and have each argument (minus 
any leading dashes) concatenated by commas.
So it should be possible to use this string as argument for the systemd 
service.
Then we only need to complementary call --argstr to accept this string 
for nvme-cli, re-parse it into the internal structure, and continue with 
the operation.

I do have some patches, but still need to polish them off (and possibly 
test them :-) before sending.

Cheers,

Hannes

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-02-24 17:17 ` [PATCH nvme-cli rfc 0/6] Support discovery log change events Hannes Reinecke
@ 2019-02-24 22:33   ` Sagi Grimberg
  2019-03-09  2:18     ` Sagi Grimberg
  2019-03-13 21:04     ` James Smart
  0 siblings, 2 replies; 34+ messages in thread
From: Sagi Grimberg @ 2019-02-24 22:33 UTC (permalink / raw)



> Hmm.
> 
> Last time I've posted the autoconnect scripts you said you'd rather have 
> them updated to be useful for generic discovery events.
> So what happened to that?

I would like to have a single event, but the one difference is that
the discovery log page events come on a discovery controller and in
the fc case we don't have such a controller (the NVME_INSTANCE env).

If we can resolve this difference we can unite and have a single udev
handler for both...

> As it stands (and as posted here) these scripts are pretty much 
> independent on the remaining series as they react to the existent FC 
> NVMe uevents.

Well at least the nvmf-connect at .service is shared...

> And the one thing which I patently dislike about those scripts is this 
> utterly weird interface.
> Problem here is that abstract systemd services (ie those ending with an 
> '@' sign) can pass exactly _one_ argument, namely that one following the 
> '@' sign.
> But for nvme we need to pass in several arguments, one for each 
> command-line parameter.
> 
> The way it's solved nowadays is to insert a 'tab' character (\t) between 
> each argument, use the combined string as parameter to the systemd 
> service, and call 'echo' to split the parameters again.
> Shudder.

Can't say I like it either...

> What I really want to do here is to add a '--dry-run' option to 
> nvme-cli, which then would instruct nvme-cli to _not_ write the final 
> string to the ioctl interface but rather to stdout.
> This string will not contain any spaces, and have each argument (minus 
> any leading dashes) concatenated by commas.
> So it should be possible to use this string as argument for the systemd 
> service.
> Then we only need to complementary call --argstr to accept this string 
> for nvme-cli, re-parse it into the internal structure, and continue with 
> the operation.
> 
> I do have some patches, but still need to polish them off (and possibly 
> test them :-) before sending.

Not sure if that is any better, but lets see patches...

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-02-24 22:33   ` Sagi Grimberg
@ 2019-03-09  2:18     ` Sagi Grimberg
  2019-03-13 21:04     ` James Smart
  1 sibling, 0 replies; 34+ messages in thread
From: Sagi Grimberg @ 2019-03-09  2:18 UTC (permalink / raw)



>> What I really want to do here is to add a '--dry-run' option to 
>> nvme-cli, which then would instruct nvme-cli to _not_ write the final 
>> string to the ioctl interface but rather to stdout.
>> This string will not contain any spaces, and have each argument (minus 
>> any leading dashes) concatenated by commas.
>> So it should be possible to use this string as argument for the 
>> systemd service.
>> Then we only need to complementary call --argstr to accept this string 
>> for nvme-cli, re-parse it into the internal structure, and continue 
>> with the operation.
>>
>> I do have some patches, but still need to polish them off (and 
>> possibly test them :-) before sending.
> 
> Not sure if that is any better, but lets see patches...

Hannes,

Did you get around to looking into this?

I didn't get any feedback from others on this btw, would be
good to know what people think assuming we do want to support
discovery change log events...

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-02-24 22:33   ` Sagi Grimberg
  2019-03-09  2:18     ` Sagi Grimberg
@ 2019-03-13 21:04     ` James Smart
  2019-03-14  0:00       ` Sagi Grimberg
  1 sibling, 1 reply; 34+ messages in thread
From: James Smart @ 2019-03-13 21:04 UTC (permalink / raw)




On 2/24/2019 2:33 PM, Sagi Grimberg wrote:
>
>> Hmm.
>>
>> Last time I've posted the autoconnect scripts you said you'd rather 
>> have them updated to be useful for generic discovery events.
>> So what happened to that?
>
> I would like to have a single event, but the one difference is that
> the discovery log page events come on a discovery controller and in
> the fc case we don't have such a controller (the NVME_INSTANCE env).
>
> If we can resolve this difference we can unite and have a single udev
> handler for both...

We shouldn't unite them. They are very different things.? One is an 
event that says I found a discovery controller go probe it, while the 
other is a discovery controller saying I have a discovery log entry go 
connect to it.?? My position is we should have both.


>
>> As it stands (and as posted here) these scripts are pretty much 
>> independent on the remaining series as they react to the existent FC 
>> NVMe uevents.
>
> Well at least the nvmf-connect at .service is shared...
>
>> And the one thing which I patently dislike about those scripts is 
>> this utterly weird interface.
>> Problem here is that abstract systemd services (ie those ending with 
>> an '@' sign) can pass exactly _one_ argument, namely that one 
>> following the '@' sign.
>> But for nvme we need to pass in several arguments, one for each 
>> command-line parameter.
>>
>> The way it's solved nowadays is to insert a 'tab' character (\t) 
>> between each argument, use the combined string as parameter to the 
>> systemd service, and call 'echo' to split the parameters again.
>> Shudder.
>
> Can't say I like it either...
>
>> What I really want to do here is to add a '--dry-run' option to 
>> nvme-cli, which then would instruct nvme-cli to _not_ write the final 
>> string to the ioctl interface but rather to stdout.
>> This string will not contain any spaces, and have each argument 
>> (minus any leading dashes) concatenated by commas.
>> So it should be possible to use this string as argument for the 
>> systemd service.
>> Then we only need to complementary call --argstr to accept this 
>> string for nvme-cli, re-parse it into the internal structure, and 
>> continue with the operation.
>>
>> I do have some patches, but still need to polish them off (and 
>> possibly test them :-) before sending.
>
> Not sure if that is any better, but lets see patches...

any updates on these ?

-- james

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-03-13 21:04     ` James Smart
@ 2019-03-14  0:00       ` Sagi Grimberg
  2019-03-14 20:43         ` James Smart
  0 siblings, 1 reply; 34+ messages in thread
From: Sagi Grimberg @ 2019-03-14  0:00 UTC (permalink / raw)



>>> Hmm.
>>>
>>> Last time I've posted the autoconnect scripts you said you'd rather 
>>> have them updated to be useful for generic discovery events.
>>> So what happened to that?
>>
>> I would like to have a single event, but the one difference is that
>> the discovery log page events come on a discovery controller and in
>> the fc case we don't have such a controller (the NVME_INSTANCE env).
>>
>> If we can resolve this difference we can unite and have a single udev
>> handler for both...
> 
> We shouldn't unite them. They are very different things.? One is an 
> event that says I found a discovery controller go probe it, while the 
> other is a discovery controller saying I have a discovery log entry go 
> connect to it.?? My position is we should have both.

Well I agree with you, but these the question is would userspace do
something different for each? As of now it doesn't.

>>> I do have some patches, but still need to polish them off (and 
>>> possibly test them :-) before sending.
>>
>> Not sure if that is any better, but lets see patches...
> 
> any updates on these ?

Good question...

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-03-14  0:00       ` Sagi Grimberg
@ 2019-03-14 20:43         ` James Smart
  2019-03-14 20:50           ` James Smart
  2019-03-14 21:41           ` Sagi Grimberg
  0 siblings, 2 replies; 34+ messages in thread
From: James Smart @ 2019-03-14 20:43 UTC (permalink / raw)




On 3/13/2019 5:00 PM, Sagi Grimberg wrote:
>
>>>> Hmm.
>>>>
>>>> Last time I've posted the autoconnect scripts you said you'd rather 
>>>> have them updated to be useful for generic discovery events.
>>>> So what happened to that?
>>>
>>> I would like to have a single event, but the one difference is that
>>> the discovery log page events come on a discovery controller and in
>>> the fc case we don't have such a controller (the NVME_INSTANCE env).
>>>
>>> If we can resolve this difference we can unite and have a single udev
>>> handler for both...
>>
>> We shouldn't unite them. They are very different things.? One is an 
>> event that says I found a discovery controller go probe it, while the 
>> other is a discovery controller saying I have a discovery log entry 
>> go connect to it.?? My position is we should have both.
>
> Well I agree with you, but these the question is would userspace do
> something different for each? As of now it doesn't.

I would think it would do something different. What we don't do well 
currently is specify per-connection parameters - when automated. That's 
where I see the difference.

I would think parameters for the discovery controller should be separate 
from parameters for? storage controllers. So the scripts could use two 
different config files to look up defaults or per-connect values.

For example, the discovery controller probe request can look at a 
discovery_connect.cfg, obtain defaults or a tr-addr-based match for say 
ctrl-loss-timeout, whether to be long-lived, and if long-lived what the 
keep-alive-tmo should be.

While the connect request can look at storage_connect.cfg, use the 
tr-addr-values and nqn to determine if it should not be connected to 
(default is yes), and if connecting, defaults or any tuning based on nqn 
or tr-addr-values to use for the connect.


>
>>>> I do have some patches, but still need to polish them off (and 
>>>> possibly test them :-) before sending.
>>>
>>> Not sure if that is any better, but lets see patches...
>>
>> any updates on these ?
>
> Good question...

-- james

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-03-14 20:43         ` James Smart
@ 2019-03-14 20:50           ` James Smart
  2019-03-15 12:40             ` Hannes Reinecke
  2019-03-14 21:41           ` Sagi Grimberg
  1 sibling, 1 reply; 34+ messages in thread
From: James Smart @ 2019-03-14 20:50 UTC (permalink / raw)




On 3/14/2019 1:43 PM, James Smart wrote:
>
>
> On 3/13/2019 5:00 PM, Sagi Grimberg wrote:
>>
>>>>> Hmm.
>>>>>
>>>>> Last time I've posted the autoconnect scripts you said you'd 
>>>>> rather have them updated to be useful for generic discovery events.
>>>>> So what happened to that?
>>>>
>>>> I would like to have a single event, but the one difference is that
>>>> the discovery log page events come on a discovery controller and in
>>>> the fc case we don't have such a controller (the NVME_INSTANCE env).
>>>>
>>>> If we can resolve this difference we can unite and have a single udev
>>>> handler for both...
>>>
>>> We shouldn't unite them. They are very different things.? One is an 
>>> event that says I found a discovery controller go probe it, while 
>>> the other is a discovery controller saying I have a discovery log 
>>> entry go connect to it.?? My position is we should have both.
>>
>> Well I agree with you, but these the question is would userspace do
>> something different for each? As of now it doesn't.
>
> I would think it would do something different. What we don't do well 
> currently is specify per-connection parameters - when automated. 
> That's where I see the difference.
>
> I would think parameters for the discovery controller should be 
> separate from parameters for? storage controllers. So the scripts 
> could use two different config files to look up defaults or 
> per-connect values.
>
> For example, the discovery controller probe request can look at a 
> discovery_connect.cfg, obtain defaults or a tr-addr-based match for 
> say ctrl-loss-timeout, whether to be long-lived, and if long-lived 
> what the keep-alive-tmo should be.
>
> While the connect request can look at storage_connect.cfg, use the 
> tr-addr-values and nqn to determine if it should not be connected to 
> (default is yes), and if connecting, defaults or any tuning based on 
> nqn or tr-addr-values to use for the connect.
>

These are still very similar - but I guess what separated things for me 
was the defaults for discovery vs storage would be different.

-- james

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-03-14 20:43         ` James Smart
  2019-03-14 20:50           ` James Smart
@ 2019-03-14 21:41           ` Sagi Grimberg
  1 sibling, 0 replies; 34+ messages in thread
From: Sagi Grimberg @ 2019-03-14 21:41 UTC (permalink / raw)



>>> We shouldn't unite them. They are very different things.? One is an 
>>> event that says I found a discovery controller go probe it, while the 
>>> other is a discovery controller saying I have a discovery log entry 
>>> go connect to it.?? My position is we should have both.
>>
>> Well I agree with you, but these the question is would userspace do
>> something different for each? As of now it doesn't.
> 
> I would think it would do something different. What we don't do well 
> currently is specify per-connection parameters - when automated. That's 
> where I see the difference.
> 
> I would think parameters for the discovery controller should be separate 
> from parameters for? storage controllers. So the scripts could use two 
> different config files to look up defaults or per-connect values.
> 
> For example, the discovery controller probe request can look at a 
> discovery_connect.cfg, obtain defaults or a tr-addr-based match for say 
> ctrl-loss-timeout, whether to be long-lived, and if long-lived what the 
> keep-alive-tmo should be.
> 
> While the connect request can look at storage_connect.cfg, use the 
> tr-addr-values and nqn to determine if it should not be connected to 
> (default is yes), and if connecting, defaults or any tuning based on nqn 
> or tr-addr-values to use for the connect.

I do have a plan to add something like params.cfg that nvme connect
would assemble for the connect string, but isn't that orthogonal
to the the action taken for both events?

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-03-14 20:50           ` James Smart
@ 2019-03-15 12:40             ` Hannes Reinecke
  2019-03-15 20:38               ` Sagi Grimberg
  0 siblings, 1 reply; 34+ messages in thread
From: Hannes Reinecke @ 2019-03-15 12:40 UTC (permalink / raw)


On 3/14/19 9:50 PM, James Smart wrote:
> 
> 
> On 3/14/2019 1:43 PM, James Smart wrote:
>>
>>
>> On 3/13/2019 5:00 PM, Sagi Grimberg wrote:
>>>
>>>>>> Hmm.
>>>>>>
>>>>>> Last time I've posted the autoconnect scripts you said you'd 
>>>>>> rather have them updated to be useful for generic discovery events.
>>>>>> So what happened to that?
>>>>>
>>>>> I would like to have a single event, but the one difference is that
>>>>> the discovery log page events come on a discovery controller and in
>>>>> the fc case we don't have such a controller (the NVME_INSTANCE env).
>>>>>
>>>>> If we can resolve this difference we can unite and have a single udev
>>>>> handler for both...
>>>>
>>>> We shouldn't unite them. They are very different things.? One is an 
>>>> event that says I found a discovery controller go probe it, while 
>>>> the other is a discovery controller saying I have a discovery log 
>>>> entry go connect to it.?? My position is we should have both.
>>>
>>> Well I agree with you, but these the question is would userspace do
>>> something different for each? As of now it doesn't.
>>
>> I would think it would do something different. What we don't do well 
>> currently is specify per-connection parameters - when automated. 
>> That's where I see the difference.
>>
>> I would think parameters for the discovery controller should be 
>> separate from parameters for? storage controllers. So the scripts 
>> could use two different config files to look up defaults or 
>> per-connect values.
>>
>> For example, the discovery controller probe request can look at a 
>> discovery_connect.cfg, obtain defaults or a tr-addr-based match for 
>> say ctrl-loss-timeout, whether to be long-lived, and if long-lived 
>> what the keep-alive-tmo should be.
>>
>> While the connect request can look at storage_connect.cfg, use the 
>> tr-addr-values and nqn to determine if it should not be connected to 
>> (default is yes), and if connecting, defaults or any tuning based on 
>> nqn or tr-addr-values to use for the connect.
>>
> 
> These are still very similar - but I guess what separated things for me 
> was the defaults for discovery vs storage would be different.
> 

I do have some quite fundamental problem with discovery log change events:
- The initial discovery is still unsolved, ie we have to know the 
transport address before we can even get to those events
- More crucially, we don't get these events when we to the first 
connection, requiring us to have the initial call to 'connect-all' on a 
separate way.
- We don't have any indication if a new subsystem has been created; once 
a subsystem a new subsystem on another port is created it's anyones 
guess if that'll show up on the particular discovery controller

Especially the second point is most vexing; we could get around this by 
requiring the discovery controller to send AENs on the initial connect
(well, actually as soon as the AENs are enabled), but I guess that'll 
require a change to the spec ...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 N?rnberg
GF: F. Imend?rffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG N?rnberg)

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-03-15 12:40             ` Hannes Reinecke
@ 2019-03-15 20:38               ` Sagi Grimberg
  2019-04-25 19:10                 ` Sagi Grimberg
  0 siblings, 1 reply; 34+ messages in thread
From: Sagi Grimberg @ 2019-03-15 20:38 UTC (permalink / raw)



> I do have some quite fundamental problem with discovery log change events:
> - The initial discovery is still unsolved, ie we have to know the 
> transport address before we can even get to those events

I understand your concern, but its completely irrelevant to this.
This is a building block to automate the entire sequence, but
the fact that we don't have it all automated doesn't mean we should
not support discovery log change AENs.

For root discovery automation, we can talk about bringing a tpar
with a prototype.

> - More crucially, we don't get these events when we to the first 
> connection, requiring us to have the initial call to 'connect-all' on a 
> separate way.

Again, this is initial root discovery, which is irrelevant to discovery
log change events.

> - We don't have any indication if a new subsystem has been created; once 
> a subsystem a new subsystem on another port is created it's anyones 
> guess if that'll show up on the particular discovery controller

What? the event is on the port level, every port/subsystem softlink will
trigger a discovery log change event. Its not anyones guess, its pretty
deterministic given that you have discovery controller connected to the
port.

> Especially the second point is most vexing; we could get around this by 
> requiring the discovery controller to send AENs on the initial connect
> (well, actually as soon as the AENs are enabled), but I guess that'll 
> require a change to the spec ...

Why? with a script to auto-connect or root discovery automation this is 
no longer an issue (not sure how much this is an issue right now).

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-03-15 20:38               ` Sagi Grimberg
@ 2019-04-25 19:10                 ` Sagi Grimberg
  2019-04-26 14:19                   ` Hannes Reinecke
  0 siblings, 1 reply; 34+ messages in thread
From: Sagi Grimberg @ 2019-04-25 19:10 UTC (permalink / raw)



>> I do have some quite fundamental problem with discovery log change 
>> events:
>> - The initial discovery is still unsolved, ie we have to know the 
>> transport address before we can even get to those events
> 
> I understand your concern, but its completely irrelevant to this.
> This is a building block to automate the entire sequence, but
> the fact that we don't have it all automated doesn't mean we should
> not support discovery log change AENs.
> 
> For root discovery automation, we can talk about bringing a tpar
> with a prototype.
> 
>> - More crucially, we don't get these events when we to the first 
>> connection, requiring us to have the initial call to 'connect-all' on 
>> a separate way.
> 
> Again, this is initial root discovery, which is irrelevant to discovery
> log change events.
> 
>> - We don't have any indication if a new subsystem has been created; 
>> once a subsystem a new subsystem on another port is created it's 
>> anyones guess if that'll show up on the particular discovery controller
> 
> What? the event is on the port level, every port/subsystem softlink will
> trigger a discovery log change event. Its not anyones guess, its pretty
> deterministic given that you have discovery controller connected to the
> port.
> 
>> Especially the second point is most vexing; we could get around this 
>> by requiring the discovery controller to send AENs on the initial connect
>> (well, actually as soon as the AENs are enabled), but I guess that'll 
>> require a change to the spec ...
> 
> Why? with a script to auto-connect or root discovery automation this is 
> no longer an issue (not sure how much this is an issue right now).

Reviving this (again).

Hannes, did you have a chance to look at this? Unless there are other
alternatives I suggest we go with this approach.

We should be able to support discovery log change events automatically
at some point...

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-04-25 19:10                 ` Sagi Grimberg
@ 2019-04-26 14:19                   ` Hannes Reinecke
  2019-04-26 15:46                     ` Sagi Grimberg
  0 siblings, 1 reply; 34+ messages in thread
From: Hannes Reinecke @ 2019-04-26 14:19 UTC (permalink / raw)


On 4/25/19 9:10 PM, Sagi Grimberg wrote:
> 
>>> I do have some quite fundamental problem with discovery log change 
>>> events:
>>> - The initial discovery is still unsolved, ie we have to know the 
>>> transport address before we can even get to those events
>>
>> I understand your concern, but its completely irrelevant to this.
>> This is a building block to automate the entire sequence, but
>> the fact that we don't have it all automated doesn't mean we should
>> not support discovery log change AENs.
>>
>> For root discovery automation, we can talk about bringing a tpar
>> with a prototype.
>>
>>> - More crucially, we don't get these events when we to the first 
>>> connection, requiring us to have the initial call to 'connect-all' on 
>>> a separate way.
>>
>> Again, this is initial root discovery, which is irrelevant to discovery
>> log change events.
>>
>>> - We don't have any indication if a new subsystem has been created; 
>>> once a subsystem a new subsystem on another port is created it's 
>>> anyones guess if that'll show up on the particular discovery controller
>>
>> What? the event is on the port level, every port/subsystem softlink will
>> trigger a discovery log change event. Its not anyones guess, its pretty
>> deterministic given that you have discovery controller connected to the
>> port.
>>
>>> Especially the second point is most vexing; we could get around this 
>>> by requiring the discovery controller to send AENs on the initial 
>>> connect
>>> (well, actually as soon as the AENs are enabled), but I guess that'll 
>>> require a change to the spec ...
>>
>> Why? with a script to auto-connect or root discovery automation this 
>> is no longer an issue (not sure how much this is an issue right now).
> 
> Reviving this (again).
> 
> Hannes, did you have a chance to look at this? Unless there are other
> alternatives I suggest we go with this approach.
> 
> We should be able to support discovery log change events automatically
> at some point...

Yes, I did.
And had several discussions with the involved parties (Hello, James :-).
Consensus was that for FC we don't really need discovery AENs, as the 
transport has knowledge about the fabric and the attached ports, and 
will generate events on any topology change already.
For RDMA and TCP, OTOH, we don't know the topology, and to be able to 
discovery we will know the fabric ports in advance.
So there automatic discovery is a real win, and arguably necessary if a 
connection needs to be re-established after a fabric outage.

Implementation-wise I'm not thoroughly happy with the MDNS approach, as 
this requires quite a bit of additional infrastructure which needs to be 
present on both sides.
And my worry is that during failover scenarios we might not be able to 
run these services at all (seeing that the root-fs is blocked).
But if that is acceptable (read: if booting from those NVMe devices is 
not required) then I guess we could go that way.

At least it's worth a shot, to get things started here.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 N?rnberg
GF: Felix Imend?rffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG N?rnberg)

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-04-26 14:19                   ` Hannes Reinecke
@ 2019-04-26 15:46                     ` Sagi Grimberg
  2019-04-26 16:17                       ` James Smart
  2019-04-27 10:53                       ` Hannes Reinecke
  0 siblings, 2 replies; 34+ messages in thread
From: Sagi Grimberg @ 2019-04-26 15:46 UTC (permalink / raw)



>> Reviving this (again).
>>
>> Hannes, did you have a chance to look at this? Unless there are other
>> alternatives I suggest we go with this approach.
>>
>> We should be able to support discovery log change events automatically
>> at some point...
> 
> Yes, I did.
> And had several discussions with the involved parties (Hello, James :-).
> Consensus was that for FC we don't really need discovery AENs, as the 
> transport has knowledge about the fabric and the attached ports, and 
> will generate events on any topology change already.
> For RDMA and TCP, OTOH, we don't know the topology, and to be able to 
> discovery we will know the fabric ports in advance.
> So there automatic discovery is a real win, and arguably necessary if a 
> connection needs to be re-established after a fabric outage.

Again, this is a different thing Hannes, please stop mixing stuff.
I'm not arguing with you that we want automatically discover the fabric,
but discovery change log events and root discovery are different things
and only one of them is up for discussion here.

> Implementation-wise I'm not thoroughly happy with the MDNS approach, as 
> this requires quite a bit of additional infrastructure which needs to be 
> present on both sides.
> And my worry is that during failover scenarios we might not be able to 
> run these services at all (seeing that the root-fs is blocked).
> But if that is acceptable (read: if booting from those NVMe devices is 
> not required) then I guess we could go that way.
I'm going to ignore stuff about root discovery here (nothing personal,
its just not the subject here), we can talk about it in a separate
thread.

Now, I'm suggesting to support discovery change log events by allowing
the user to setup a persistent discovery controller and have the kernel
fire to userspace discovery change log events over that controller.

Any comments/feedback/alternatives on _this_?

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-04-26 15:46                     ` Sagi Grimberg
@ 2019-04-26 16:17                       ` James Smart
  2019-04-26 19:10                         ` Sagi Grimberg
  2019-04-27 10:53                       ` Hannes Reinecke
  1 sibling, 1 reply; 34+ messages in thread
From: James Smart @ 2019-04-26 16:17 UTC (permalink / raw)



On 4/26/2019 8:46 AM, Sagi Grimberg wrote:
>
>>> Reviving this (again).
>>>
>>> Hannes, did you have a chance to look at this? Unless there are other
>>> alternatives I suggest we go with this approach.
>>>
>>> We should be able to support discovery log change events automatically
>>> at some point...
>>
>> Yes, I did.
>> And had several discussions with the involved parties (Hello, James :-).
>> Consensus was that for FC we don't really need discovery AENs, as the 
>> transport has knowledge about the fabric and the attached ports, and 
>> will generate events on any topology change already.
>> For RDMA and TCP, OTOH, we don't know the topology, and to be able to 
>> discovery we will know the fabric ports in advance.
>> So there automatic discovery is a real win, and arguably necessary if 
>> a connection needs to be re-established after a fabric outage.
>
> Again, this is a different thing Hannes, please stop mixing stuff.
> I'm not arguing with you that we want automatically discover the fabric,
> but discovery change log events and root discovery are different things
> and only one of them is up for discussion here.
>
>> Implementation-wise I'm not thoroughly happy with the MDNS approach, 
>> as this requires quite a bit of additional infrastructure which needs 
>> to be present on both sides.
>> And my worry is that during failover scenarios we might not be able 
>> to run these services at all (seeing that the root-fs is blocked).
>> But if that is acceptable (read: if booting from those NVMe devices 
>> is not required) then I guess we could go that way.
> I'm going to ignore stuff about root discovery here (nothing personal,
> its just not the subject here), we can talk about it in a separate
> thread.
>
> Now, I'm suggesting to support discovery change log events by allowing
> the user to setup a persistent discovery controller and have the kernel
> fire to userspace discovery change log events over that controller.
>
> Any comments/feedback/alternatives on _this_?

I think it's fine, and we should be able to merge possibly what the AEN 
event looks like and the FC event.? The Events are pretty much the same 
between FC and the AEN -> go perform a connect-all against the indicated 
discovery controller.?? What is different is how do you identify the 
discovery controller, and if you are re-using an an existing controller, 
how would that connect-all occur if it doesn't create a new discovery 
controller instance.? For how to identify the discovery controller, it 
could be: a) by full discovery-log-based-address fields? (not all must 
be present; e.g. FC);? b) by /dev/nvme? name of the persistent discovery 
controller (e.g. the AEN); or c) a derivation of the /dev/nvme? name 
(aka (b)) by using (a) to see if a persistent controller exists at that 
location.
Am I getting a little off base ?

I agree - we don't need to address is how does one initially create the 
persistent discovery controller.

-- james

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-04-26 16:17                       ` James Smart
@ 2019-04-26 19:10                         ` Sagi Grimberg
  2019-04-26 20:14                           ` James Smart
  2019-04-27 11:05                           ` Hannes Reinecke
  0 siblings, 2 replies; 34+ messages in thread
From: Sagi Grimberg @ 2019-04-26 19:10 UTC (permalink / raw)



>> Now, I'm suggesting to support discovery change log events by allowing
>> the user to setup a persistent discovery controller and have the kernel
>> fire to userspace discovery change log events over that controller.
>>
>> Any comments/feedback/alternatives on _this_?
> 
> I think it's fine, and we should be able to merge possibly what the AEN 
> event looks like and the FC event.? The Events are pretty much the same 
> between FC and the AEN -> go perform a connect-all against the indicated 
> discovery controller.?? What is different is how do you identify the 
> discovery controller, and if you are re-using an an existing controller, 
> how would that connect-all occur if it doesn't create a new discovery 
> controller instance.? For how to identify the discovery controller, it 
> could be: a) by full discovery-log-based-address fields? (not all must 
> be present; e.g. FC);? b) by /dev/nvme? name of the persistent discovery 
> controller (e.g. the AEN); or c) a derivation of the /dev/nvme? name 
> (aka (b)) by using (a) to see if a persistent controller exists at that 
> location.
> Am I getting a little off base ?

Not at all,

Given that FC does not need a persistent discovery controller to get the
event, its fine that it will create a temporary discovery controller.
Given that FC and IP do different things, we can separate them. But what
I wanted to understand, is if I export a second subsystem on the same
FC port, will it generate an FC event?

The only thing we need to make sure works is when FC also has a
persistent discovery controller and it will see both FC events and
nvme discovery log change events (not sure its a valid use-case at all
though). I don't see how that could break because if I understand
correctly it will see two events and worst case will fail to connect
duplicate controller quietly which is fine I guess...


> I agree - we don't need to address is how does one initially create the 
> persistent discovery controller.

Exactly, we can worry about that afterwards, once we have a TP in place.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-04-26 19:10                         ` Sagi Grimberg
@ 2019-04-26 20:14                           ` James Smart
  2019-05-06 22:38                             ` Arun Easi
  2019-04-27 11:05                           ` Hannes Reinecke
  1 sibling, 1 reply; 34+ messages in thread
From: James Smart @ 2019-04-26 20:14 UTC (permalink / raw)


On 4/26/2019 12:10 PM, Sagi Grimberg wrote:
>
>>> Now, I'm suggesting to support discovery change log events by allowing
>>> the user to setup a persistent discovery controller and have the kernel
>>> fire to userspace discovery change log events over that controller.
>>>
>>> Any comments/feedback/alternatives on _this_?
>>
>> I think it's fine, and we should be able to merge possibly what the 
>> AEN event looks like and the FC event.? The Events are pretty much 
>> the same between FC and the AEN -> go perform a connect-all against 
>> the indicated discovery controller.?? What is different is how do you 
>> identify the discovery controller, and if you are re-using an an 
>> existing controller, how would that connect-all occur if it doesn't 
>> create a new discovery controller instance.? For how to identify the 
>> discovery controller, it could be: a) by full 
>> discovery-log-based-address fields? (not all must be present; e.g. 
>> FC);? b) by /dev/nvme? name of the persistent discovery controller 
>> (e.g. the AEN); or c) a derivation of the /dev/nvme? name (aka (b)) 
>> by using (a) to see if a persistent controller exists at that location.
>> Am I getting a little off base ?
>
> Not at all,
>
> Given that FC does not need a persistent discovery controller to get the
> event, its fine that it will create a temporary discovery controller.
> Given that FC and IP do different things, we can separate them. But what
> I wanted to understand, is if I export a second subsystem on the same
> FC port, will it generate an FC event?

Well - FC does have an RSCN notification to cause the rescan of the FC 
port, if the FC port as the NVME Discovery attribute, but Christoph 
rejected the 3 patches about a year ago. I'll be reposting it. So the 
answer should eventually be - yes, it will get an FC event if the 
subsystem list at the FC port changes.


>
> The only thing we need to make sure works is when FC also has a
> persistent discovery controller and it will see both FC events and
> nvme discovery log change events (not sure its a valid use-case at all
> though). I don't see how that could break because if I understand
> correctly it will see two events and worst case will fail to connect
> duplicate controller quietly which is fine I guess...

it shouldn't hurt/matter.? it can be a valid use case. FC already has 
the issue that link bounces and manual actions can create the same 
discovery controller being talked to simultaneously. We allow duplicate 
discovery controllers to the same <host,host_port,tgt_port,subsystem> - 
actually I think that "is this a discovery controller" check is in 
common code around the duplicates code. ?? But duplicate storage 
controllers to that tuple are by default not allowed, but can be 
over-ridden. ?? Regardless of it being ok, it should be clean. ? A "-d 
<devicename>" option (different connect-all tuple argument from the 
event) would make that pretty clean.

-- james

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-04-26 15:46                     ` Sagi Grimberg
  2019-04-26 16:17                       ` James Smart
@ 2019-04-27 10:53                       ` Hannes Reinecke
  1 sibling, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2019-04-27 10:53 UTC (permalink / raw)


On 4/26/19 5:46 PM, Sagi Grimberg wrote:
> 
>>> Reviving this (again).
>>>
>>> Hannes, did you have a chance to look at this? Unless there are other
>>> alternatives I suggest we go with this approach.
>>>
>>> We should be able to support discovery log change events automatically
>>> at some point...
>>
>> Yes, I did.
>> And had several discussions with the involved parties (Hello, James :-).
>> Consensus was that for FC we don't really need discovery AENs, as the 
>> transport has knowledge about the fabric and the attached ports, and 
>> will generate events on any topology change already.
>> For RDMA and TCP, OTOH, we don't know the topology, and to be able to 
>> discovery we will know the fabric ports in advance.
>> So there automatic discovery is a real win, and arguably necessary if 
>> a connection needs to be re-established after a fabric outage.
> 
> Again, this is a different thing Hannes, please stop mixing stuff.
> I'm not arguing with you that we want automatically discover the fabric,
> but discovery change log events and root discovery are different things
> and only one of them is up for discussion here.
> 
OK.

>> Implementation-wise I'm not thoroughly happy with the MDNS approach, 
>> as this requires quite a bit of additional infrastructure which needs 
>> to be present on both sides.
>> And my worry is that during failover scenarios we might not be able to 
>> run these services at all (seeing that the root-fs is blocked).
>> But if that is acceptable (read: if booting from those NVMe devices is 
>> not required) then I guess we could go that way.
> I'm going to ignore stuff about root discovery here (nothing personal,
> its just not the subject here), we can talk about it in a separate
> thread.
> 
> Now, I'm suggesting to support discovery change log events by allowing
> the user to setup a persistent discovery controller and have the kernel
> fire to userspace discovery change log events over that controller.
> 
> Any comments/feedback/alternatives on _this_?

Perfectly fine with that.
I just would like to have the possibility to generate the discovery AEN 
on the initial connect of a persistent discovery controller (ie treat 
the initial connect as a discovery log change event).
If we had the initial event we would be able to have a unified method 
for handling discovery events; otherwise we'd have to have a separate 
mechanism for the initial connect (where we need to create the 
persistent connection _and_ parse the disocvery log) and AENs (where we 
receive an uevent and kick off a systemd service).
But it might be that we'd need to update the protocol for that, though.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare at suse.de                              +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 N?rnberg
GF: Felix Imend?rffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG N?rnberg)

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-04-26 19:10                         ` Sagi Grimberg
  2019-04-26 20:14                           ` James Smart
@ 2019-04-27 11:05                           ` Hannes Reinecke
  1 sibling, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2019-04-27 11:05 UTC (permalink / raw)


On 4/26/19 9:10 PM, Sagi Grimberg wrote:
> 
>>> Now, I'm suggesting to support discovery change log events by allowing
>>> the user to setup a persistent discovery controller and have the kernel
>>> fire to userspace discovery change log events over that controller.
>>>
>>> Any comments/feedback/alternatives on _this_?
>>
>> I think it's fine, and we should be able to merge possibly what the 
>> AEN event looks like and the FC event.? The Events are pretty much the 
>> same between FC and the AEN -> go perform a connect-all against the 
>> indicated discovery controller.?? What is different is how do you 
>> identify the discovery controller, and if you are re-using an an 
>> existing controller, how would that connect-all occur if it doesn't 
>> create a new discovery controller instance.? For how to identify the 
>> discovery controller, it could be: a) by full 
>> discovery-log-based-address fields? (not all must be present; e.g. 
>> FC);? b) by /dev/nvme? name of the persistent discovery controller 
>> (e.g. the AEN); or c) a derivation of the /dev/nvme? name (aka (b)) by 
>> using (a) to see if a persistent controller exists at that location.
>> Am I getting a little off base ?
> 
> Not at all,
> 
> Given that FC does not need a persistent discovery controller to get the
> event, its fine that it will create a temporary discovery controller.
> Given that FC and IP do different things, we can separate them. But what
> I wanted to understand, is if I export a second subsystem on the same
> FC port, will it generate an FC event?
> 
Typically not.
FC uevents are generated on rports coming and going, hence we won't 
normally see an additional subsystem.
However, you could tweak the target implementation to relogin, at which 
point you obviously will be getting an event.

> The only thing we need to make sure works is when FC also has a
> persistent discovery controller and it will see both FC events and
> nvme discovery log change events (not sure its a valid use-case at all
> though). I don't see how that could break because if I understand
> correctly it will see two events and worst case will fail to connect
> duplicate controller quietly which is fine I guess...
> 
If we stick with the current model of forwarding uevents to a systemd 
service we just need to make sure that both events will start the _same_ 
systemd service. Then we'd just have a duplicate 'start' call for the 
same service, which will be bunched together into a single operation by 
systemd.

And even if we do not manage to do so, we already have duplicate calls 
to the systemd service eg on link bouncing, so we need to (and already 
do) handle that one.

So duplicate events is not really a problem; it's just inefficient.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare at suse.de                              +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 N?rnberg
GF: Felix Imend?rffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG N?rnberg)

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-04-26 20:14                           ` James Smart
@ 2019-05-06 22:38                             ` Arun Easi
  2019-05-07 16:47                               ` James Smart
  0 siblings, 1 reply; 34+ messages in thread
From: Arun Easi @ 2019-05-06 22:38 UTC (permalink / raw)


Hi James,

On Fri, 26 Apr 2019, 1:14pm, James Smart wrote:

> On 4/26/2019 12:10 PM, Sagi Grimberg wrote:
> >
> > Given that FC does not need a persistent discovery controller to get the
> > event, its fine that it will create a temporary discovery controller.
> > Given that FC and IP do different things, we can separate them. But what
> > I wanted to understand, is if I export a second subsystem on the same
> > FC port, will it generate an FC event?
> 
> Well - FC does have an RSCN notification to cause the rescan of the FC 
> port, if the FC port as the NVME Discovery attribute, but Christoph 
> rejected the 3 patches about a year ago. I'll be reposting it. So the 
> answer should eventually be - yes, it will get an FC event if the 
> subsystem list at the FC port changes.
> 

So, just wondering, for the case of subsystem addition to an existing FC 
port, is it recommended or mandated in the spec that the target FC port 
send out a RSCN? The RSCN should be from the target FC port, not the 
switch, right? I could not find such a reference in the FC-NVME spec 
version I have (r1.15).

On a related note, from a FC LLDD perspective, then, are they required to 
invoke nvme_fc_rescan_remoteport() upon RSCN reception?

Regards,
-Arun

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 1/6] fabrics: ignore arguments that pass in "none"
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 1/6] fabrics: ignore arguments that pass in "none" Sagi Grimberg
@ 2019-05-07  9:17   ` Max Gurtovoy
  2019-05-07 17:04     ` James Smart
  0 siblings, 1 reply; 34+ messages in thread
From: Max Gurtovoy @ 2019-05-07  9:17 UTC (permalink / raw)



On 2/23/2019 4:32 AM, Sagi Grimberg wrote:
> As we want to support discovery uevents over different
> transports, we want to allow the kernel to provide missing
> information in the form of none and have nvme-cli properly
> ignore it.
>
> One example is the host_traddr. If it is not set (which means
> that the default route determined the host address) we will
> want to do the same for newly discovered controllers.
>
> So allow users to pass 'none' arguments as well.

can you please add an example for the command in the commit message ?


>
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
> ---
>   fabrics.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/fabrics.c b/fabrics.c
> index f5cd2127db05..ad9a15e72070 100644
> --- a/fabrics.c
> +++ b/fabrics.c
> @@ -565,7 +565,7 @@ add_argument(char **argstr, int *max_len, char *arg_str, char *arg)
>   {
>   	int len;
>   
> -	if (arg) {
> +	if (arg && strcmp(arg, "none")) {
>   		len = snprintf(*argstr, *max_len, ",%s=%s", arg_str, arg);
>   		if (len < 0)
>   			return -EINVAL;
> @@ -658,14 +658,14 @@ retry:
>   		return -EINVAL;
>   	p += len;
>   
> -	if (cfg.hostnqn) {
> +	if (cfg.hostnqn && strcmp(cfg.hostnqn, "none")) {
>   		len = sprintf(p, ",hostnqn=%s", cfg.hostnqn);
>   		if (len < 0)
>   			return -EINVAL;
>   		p += len;
>   	}
>   
> -	if (cfg.hostid) {
> +	if (cfg.hostid && strcmp(cfg.hostnqn, "none")) {

c&p bug here ?


>   		len = sprintf(p, ",hostid=%s", cfg.hostid);
>   		if (len < 0)
>   			return -EINVAL;
> @@ -700,7 +700,7 @@ retry:
>   		p += len;
>   	}
>   
> -	if (cfg.host_traddr) {
> +	if (cfg.host_traddr && strcmp(cfg.host_traddr, "none")) {
>   		len = sprintf(p, ",host_traddr=%s", cfg.host_traddr);
>   		if (len < 0)
>   			return -EINVAL;

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 2/6] fabrics: support persistent connections to a discovery controller
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 2/6] fabrics: support persistent connections to a discovery controller Sagi Grimberg
@ 2019-05-07  9:22   ` Max Gurtovoy
  0 siblings, 0 replies; 34+ messages in thread
From: Max Gurtovoy @ 2019-05-07  9:22 UTC (permalink / raw)



On 2/23/2019 4:32 AM, Sagi Grimberg wrote:
> Simply don't destroy the discovery controller after getting the
> log pages. Note that persistent connection to a discovery subsystem
> require to pass in a non-zero kato value, so if not provided we
> simply use a default of 30 seconds kato.
>
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>

same here for example of the command line.

otherwise looks good,

Reviewed-by: Max Gurtovoy <maxg at mellanox.com>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 3/6] fabrics: allow user to retrieve discovery log from existing discovery controller
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 3/6] fabrics: allow user to retrieve discovery log from existing " Sagi Grimberg
@ 2019-05-07  9:27   ` Max Gurtovoy
  0 siblings, 0 replies; 34+ messages in thread
From: Max Gurtovoy @ 2019-05-07  9:27 UTC (permalink / raw)



On 2/23/2019 4:32 AM, Sagi Grimberg wrote:
> Simply allow discover to receive the discovery device node name.
> Also centralize extraction of controller instance from the controller
> name to a common helper.
>
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
> ---
>   fabrics.c | 32 ++++++++++++++++++++++----------
>   1 file changed, 22 insertions(+), 10 deletions(-)
>
> diff --git a/fabrics.c b/fabrics.c

same here for adding example of the command line.


otherwise looks good,

Reviewed-by: Max Gurtovoy <maxg at mellanox.com>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 4/6] fabrics: add --quiet option
  2019-02-23  2:32 ` [PATCH nvme-cli rfc 4/6] fabrics: add --quiet option Sagi Grimberg
@ 2019-05-07  9:35   ` Max Gurtovoy
  2019-05-07 17:12     ` James Smart
  0 siblings, 1 reply; 34+ messages in thread
From: Max Gurtovoy @ 2019-05-07  9:35 UTC (permalink / raw)



On 2/23/2019 4:32 AM, Sagi Grimberg wrote:
> Now we are going to have discovery log change events, so
> instead of trying to figure out which NVM subsystem ports
> are already connected, we let the driver do that and
> only suppress the failure messages.
>
> This option will be used by the discovery log change corresponding
> udev rule.
>
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
> ---
>   fabrics.c | 14 +++++++++++---
>   1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/fabrics.c b/fabrics.c
> index 702f928cd8ed..f08fe3f9a0be 100644
> --- a/fabrics.c
> +++ b/fabrics.c
> @@ -66,6 +66,7 @@ static struct config {
>   	int  hdr_digest;
>   	int  data_digest;
>   	bool persistent;
> +	bool quiet;
>   } cfg = { NULL };
>   
>   #define BUF_SIZE		4096
> @@ -216,9 +217,11 @@ static int add_ctrl(const char *argstr)
>   		goto out;
>   	}
>   
> -	if (write(fd, argstr, len) != len) {
> -		fprintf(stderr, "Failed to write to %s: %s\n",
> -			 PATH_NVME_FABRICS, strerror(errno));
> +	ret = write(fd, argstr, len);
> +	if (ret != len) {
> +		if (errno != EALREADY || !cfg.quiet)
> +			fprintf(stderr, "Failed to write to %s: %s\n",
> +				 PATH_NVME_FABRICS, strerror(errno));
>   		ret = -errno;
>   		goto out_close;
>   	}
> @@ -840,6 +843,10 @@ static int connect_ctrls(struct nvmf_disc_rsp_page_hdr *log, int numrec)
>   		/* already connected print message	*/
>   		if (instance == -EALREADY) {
>   			const char *traddr = log->entries[i].traddr;
> +
> +			if (cfg.quiet)
> +				continue;
> +

maybe if(!cfg.quiet) do a fprintf() instead of continue.

(and also example can be added to commit message :) )


>   			fprintf(stderr,
>   				"traddr=%.*s is already connected\n",
>   				space_strip_len(NVMF_TRADDR_SIZE, traddr),
> @@ -996,6 +1003,7 @@ int discover(const char *desc, int argc, char **argv, bool connect)
>   		{"nr-poll-queues",  'P', "LIST", CFG_INT, &cfg.nr_poll_queues,    required_argument, "number of poll queues to use (default 0)" },
>   		{"queue-size",      'Q', "LIST", CFG_INT, &cfg.queue_size,      required_argument, "number of io queue elements to use (default 128)" },
>   		{"persistent",  'p', "LIST", CFG_NONE, &cfg.persistent,  no_argument, "persistent discovery connection" },
> +		{"quiet",       'Q', "LIST", CFG_NONE, &cfg.quiet,  no_argument, "suppress already connected errors" },
>   		{NULL},
>   	};
>   

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 0/6] Support discovery log change events
  2019-05-06 22:38                             ` Arun Easi
@ 2019-05-07 16:47                               ` James Smart
  0 siblings, 0 replies; 34+ messages in thread
From: James Smart @ 2019-05-07 16:47 UTC (permalink / raw)




On 5/6/2019 3:38 PM, Arun Easi wrote:
> Hi James,
>
> On Fri, 26 Apr 2019, 1:14pm, James Smart wrote:
>
>> On 4/26/2019 12:10 PM, Sagi Grimberg wrote:
>>> Given that FC does not need a persistent discovery controller to get the
>>> event, its fine that it will create a temporary discovery controller.
>>> Given that FC and IP do different things, we can separate them. But what
>>> I wanted to understand, is if I export a second subsystem on the same
>>> FC port, will it generate an FC event?
>> Well - FC does have an RSCN notification to cause the rescan of the FC
>> port, if the FC port as the NVME Discovery attribute, but Christoph
>> rejected the 3 patches about a year ago. I'll be reposting it. So the
>> answer should eventually be - yes, it will get an FC event if the
>> subsystem list at the FC port changes.
>>
> So, just wondering, for the case of subsystem addition to an existing FC
> port, is it recommended or mandated in the spec that the target FC port
> send out a RSCN? The RSCN should be from the target FC port, not the
> switch, right? I could not find such a reference in the FC-NVME spec
> version I have (r1.15).
>
> On a related note, from a FC LLDD perspective, then, are they required to
> invoke nvme_fc_rescan_remoteport() upon RSCN reception?
>
> Regards,
> -Arun

There's no requirement, but there is a recommendation. Should be in 
section C.? RSCN should be from the FC port, but it can address the 
fabric and have the fabric deliver it to the recipients.

you need to be looking at at least r1.19 + ammendment 1.

For the FC LLDD perspective - I had pushed the api into the initiator 
transport where the driver can invoke the rescan, but it wasn't picked 
up. I'm about to repost after merging with similar work done with the 
disovery AEN work on the nvmet side.?? Also, for proper 1.19+ammendment 
1 support, I will be adding an LS-receive interface to the transport - 
so that Disconnects and (soon) Disconnect I/O Connections can be passed up.

-- james

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 1/6] fabrics: ignore arguments that pass in "none"
  2019-05-07  9:17   ` Max Gurtovoy
@ 2019-05-07 17:04     ` James Smart
  0 siblings, 0 replies; 34+ messages in thread
From: James Smart @ 2019-05-07 17:04 UTC (permalink / raw)




On 5/7/2019 2:17 AM, Max Gurtovoy wrote:
>
> On 2/23/2019 4:32 AM, Sagi Grimberg wrote:
>> As we want to support discovery uevents over different
>> transports, we want to allow the kernel to provide missing
>> information in the form of none and have nvme-cli properly
>> ignore it.
>>
>> One example is the host_traddr. If it is not set (which means
>> that the default route determined the host address) we will
>> want to do the same for newly discovered controllers.
>>
>> So allow users to pass 'none' arguments as well.
>
> can you please add an example for the command in the commit message ?
>

yes, I will in the repost. The real interesting case will come from the 
uevent, which will pass the address argument to the udev event set to 
"none". Therefore the event:
 ? +ACTION=="change", SUBSYSTEM=="nvme", ENV{NVME_EVENT}=="discovery",\
 ? +? ENV{NVME_CTLR_NAME}=="*", ENV{NVME_TRTYPE}=="*", 
ENV{NVME_TRADDR}=="*", \
 ? +? ENV{NVME_TRSVCID}=="*", ENV{NVME_HOST_TRADDR}=="*", \
 ? +? RUN+="/usr/bin/systemctl --no-block start 
nvmf-connect at --device=$env{NVME_CTLR_NAME}\t--transport=$env{NVME_TRTYPE}\t-- 
traddr=$env{NVME_TRADDR}\t--trsvcid=$env{NVME_TRSVCID}\t--host-traddr=$env{NVME_HOST_TRADDR}.service"

will result in:
"/usr/bin/systemctl --no-block start 
nvmf-connect at --device=nvme5\t--transport=none\t-- 
traddr=none\t--trsvcid=none\t--host-traddr=none.service"
or
"/usr/bin/systemctl --no-block start 
nvmf-connect at --device=none\t--transport=fc\t-- 
traddr=nn:aa-bb-cc-dd-00-11-22-33-pn:aa-bb-cc-dd-00-11-22-33\t--trsvcid=none\t--host-traddr=nn:aa-bb-cc-dd-00-11-22-34-pn:aa-bb-cc-dd-00-11-22-34.service"

or similar.


>
>>
>> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
>> ---
>> ? fabrics.c | 8 ++++----
>> ? 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/fabrics.c b/fabrics.c
>> index f5cd2127db05..ad9a15e72070 100644
>> --- a/fabrics.c
>> +++ b/fabrics.c
>> @@ -565,7 +565,7 @@ add_argument(char **argstr, int *max_len, char 
>> *arg_str, char *arg)
>> ? {
>> ????? int len;
>> ? -??? if (arg) {
>> +??? if (arg && strcmp(arg, "none")) {
>> ????????? len = snprintf(*argstr, *max_len, ",%s=%s", arg_str, arg);
>> ????????? if (len < 0)
>> ????????????? return -EINVAL;
>> @@ -658,14 +658,14 @@ retry:
>> ????????? return -EINVAL;
>> ????? p += len;
>> ? -??? if (cfg.hostnqn) {
>> +??? if (cfg.hostnqn && strcmp(cfg.hostnqn, "none")) {
>> ????????? len = sprintf(p, ",hostnqn=%s", cfg.hostnqn);
>> ????????? if (len < 0)
>> ????????????? return -EINVAL;
>> ????????? p += len;
>> ????? }
>> ? -??? if (cfg.hostid) {
>> +??? if (cfg.hostid && strcmp(cfg.hostnqn, "none")) {
>
> c&p bug here ?

Yes. that was there in the original. I thought I fixed that.

-- james

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 4/6] fabrics: add --quiet option
  2019-05-07  9:35   ` Max Gurtovoy
@ 2019-05-07 17:12     ` James Smart
  2019-05-07 18:54       ` Max Gurtovoy
  0 siblings, 1 reply; 34+ messages in thread
From: James Smart @ 2019-05-07 17:12 UTC (permalink / raw)




On 5/7/2019 2:35 AM, Max Gurtovoy wrote:
>
> (and also example can be added to commit message :) )
>

wow, all these examples. What fun is that. What's left to the developer 
to go figure out ? :)

-- james

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH nvme-cli rfc 4/6] fabrics: add --quiet option
  2019-05-07 17:12     ` James Smart
@ 2019-05-07 18:54       ` Max Gurtovoy
  0 siblings, 0 replies; 34+ messages in thread
From: Max Gurtovoy @ 2019-05-07 18:54 UTC (permalink / raw)



On 5/7/2019 8:12 PM, James Smart wrote:
>
>
> On 5/7/2019 2:35 AM, Max Gurtovoy wrote:
>>
>> (and also example can be added to commit message :) )
>>
>
> wow, all these examples. What fun is that. What's left to the 
> developer to go figure out ? :)

well I'm thinking about non-developers that can get added value from 
good examples.


>
> -- james
>

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2019-05-07 18:54 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-23  2:32 [PATCH nvme-cli rfc 0/6] Support discovery log change events Sagi Grimberg
2019-02-23  2:32 ` [PATCH nvme-cli rfc 1/6] fabrics: ignore arguments that pass in "none" Sagi Grimberg
2019-05-07  9:17   ` Max Gurtovoy
2019-05-07 17:04     ` James Smart
2019-02-23  2:32 ` [PATCH nvme-cli rfc 2/6] fabrics: support persistent connections to a discovery controller Sagi Grimberg
2019-05-07  9:22   ` Max Gurtovoy
2019-02-23  2:32 ` [PATCH nvme-cli rfc 3/6] fabrics: allow user to retrieve discovery log from existing " Sagi Grimberg
2019-05-07  9:27   ` Max Gurtovoy
2019-02-23  2:32 ` [PATCH nvme-cli rfc 4/6] fabrics: add --quiet option Sagi Grimberg
2019-05-07  9:35   ` Max Gurtovoy
2019-05-07 17:12     ` James Smart
2019-05-07 18:54       ` Max Gurtovoy
2019-02-23  2:32 ` [PATCH nvme-cli rfc 5/6] fabrics: systemd and udev support for automatic discovery log changes Sagi Grimberg
2019-02-23  2:32 ` [PATCH nvme-cli rfc 6/6] fc: add autoconnect systemd service and udev rule for fc " Sagi Grimberg
2019-02-24 17:17 ` [PATCH nvme-cli rfc 0/6] Support discovery log change events Hannes Reinecke
2019-02-24 22:33   ` Sagi Grimberg
2019-03-09  2:18     ` Sagi Grimberg
2019-03-13 21:04     ` James Smart
2019-03-14  0:00       ` Sagi Grimberg
2019-03-14 20:43         ` James Smart
2019-03-14 20:50           ` James Smart
2019-03-15 12:40             ` Hannes Reinecke
2019-03-15 20:38               ` Sagi Grimberg
2019-04-25 19:10                 ` Sagi Grimberg
2019-04-26 14:19                   ` Hannes Reinecke
2019-04-26 15:46                     ` Sagi Grimberg
2019-04-26 16:17                       ` James Smart
2019-04-26 19:10                         ` Sagi Grimberg
2019-04-26 20:14                           ` James Smart
2019-05-06 22:38                             ` Arun Easi
2019-05-07 16:47                               ` James Smart
2019-04-27 11:05                           ` Hannes Reinecke
2019-04-27 10:53                       ` Hannes Reinecke
2019-03-14 21:41           ` Sagi Grimberg

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.