* Nvmet discovery implementation. Questions and a patch / proposal.
@ 2022-02-28 16:39 Mark Ruijter
  2022-03-01  7:05 ` Hannes Reinecke
  0 siblings, 1 reply; 6+ messages in thread
From: Mark Ruijter @ 2022-02-28 16:39 UTC (permalink / raw)
  To: linux-nvme; +Cc: Sagi Grimberg, Hannes Reinecke

[-- Attachment #1: Type: text/plain, Size: 5772 bytes --]


The current discovery implementation only works after a subsystem has been created and has been linked to a port.
As a result, 'nvme discover' from an initiator will only show a discovery device after the first subsystem has been exported.

The patch that I propose adds these features:
A. Make discovery work immediately after the target port has been configured, without needing to configure subsystems first.
B. Move the discovery_nqn configuration option to the port instead of having the same configuration option duplicated in every subsystem.
C. Allow enabling and disabling a target port without needing to unlink target subsystems.

For example, when running discovery on a system with a configured target port, without exported subsystems, this is the result:

root@saturn:/sys/kernel/config/nvmet/subsystems# ls -l
total 0
root@saturn:/sys/kernel/config/nvmet/subsystems# nvme discover -t tcp -a 10.100.10.1 -s 4420
Failed to write to /dev/nvme-fabrics: Connection refused

Currently, making discovery work requires something like this:

root@saturn:/sys/kernel/config/nvmet/subsystems# mkdir vol1
root@saturn:/sys/kernel/config/nvmet/subsystems# cd vol1/
root@saturn:/sys/kernel/config/nvmet/subsystems/vol1# echo 1 > attr_allow_any_host 
root@saturn:/sys/kernel/config/nvmet/subsystems/vol1# cd namespaces/
root@saturn:/sys/kernel/config/nvmet/subsystems/vol1/namespaces# mkdir 1
root@saturn:/sys/kernel/config/nvmet/subsystems/vol1/namespaces# cd 1
root@saturn:/sys/kernel/config/nvmet/subsystems/vol1/namespaces/1# echo /dev/nullb0 > device_path 
root@saturn:/sys/kernel/config/nvmet/subsystems/vol1/namespaces/1# echo 1 > enable 
root@saturn:/sys/kernel/config/nvmet/subsystems/vol1/namespaces/1# cd /sys/kernel/config/nvmet/ports/10120/
root@saturn:/sys/kernel/config/nvmet/ports/10120# cd subsystems/
root@saturn:/sys/kernel/config/nvmet/ports/10120/subsystems# ln -s /sys/kernel/config/nvmet/subsystems/vol1 vol1
root@saturn:/sys/kernel/config/nvmet/ports/10120/subsystems# nvme discover -t tcp -a 10.100.10.1 -s 4420

Discovery Log Number of Records 2, Generation counter 2
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified, sq flow control disable supported
portid:  10120
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  10.100.10.1
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  10120
trsvcid: 4420
subnqn:  vol1
traddr:  10.100.10.1
sectype: none

What is also misleading is that changing the discovery_nqn from one subsystem also changes it for every other subsystem.
Since the attribute exists in every subsystem, a user could reasonably assume that it applies only to that subsystem.

root@saturn:/sys/kernel/config/nvmet/subsystems# cat vol1/attr_discovery_nqn 
nqn.2014-08.org.nvmexpress.discovery
root@saturn:/sys/kernel/config/nvmet/subsystems# echo nqn.2014-08.org.testme.discovery > vol2/attr_discovery_nqn 
root@saturn:/sys/kernel/config/nvmet/subsystems# cat vol1/attr_discovery_nqn 
nqn.2014-08.org.testme.discovery

In my opinion the preferred behavior would be that an initiator can discover a target port *immediately* after it has been configured.
Even when the port has no exported subsystems, showing the discovery device is already extremely useful.
It can be used to verify that initiators and targets can establish communication.

Since the discovery_nqn is not uniquely related to a subsystem, it should not be configured as part of a subsystem.
The patch attached to this email illustrates a slightly different discovery implementation.

First configure a target port.
root@saturn:/sys/kernel/config/nvmet/ports/10120# cat addr_traddr 
10.100.10.1
root@saturn:/sys/kernel/config/nvmet/ports/10120# cat addr_trtype 
tcp
root@saturn:/sys/kernel/config/nvmet/ports/10120# cat param_discovery_nqn 
nqn.2014-08.org.nvmexpress.discovery
root@saturn:/sys/kernel/config/nvmet/ports/10120# cat param_enable_target
0

At this point, even though the target has been configured, discovery still fails because the target port is still disabled.

root@saturn:/sys/kernel/config/nvmet/ports/10120# nvme discover -t tcp -a 10.100.10.1 -s 4420
Failed to write to /dev/nvme-fabrics: Connection refused
root@saturn:/sys/kernel/config/nvmet/ports/10120#

*After enabling the target, discovery immediately works.*

root@saturn:/sys/kernel/config/nvmet/ports/10120# echo 1 > param_enable_target 
root@saturn:/sys/kernel/config/nvmet/ports/10120# nvme discover -t tcp -a 10.100.10.1 -s 4420

Discovery Log Number of Records 1, Generation counter 0
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified, sq flow control disable supported
portid:  10120
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  10.100.10.1
sectype: none

I have removed the discovery_nqn configuration attribute from the subsystems:
root@saturn:/sys/kernel/config/nvmet/subsystems/vol1# ls
allowed_hosts  attr_allow_any_host  attr_cntlid_max  attr_cntlid_min  attr_model  attr_pi_enable  attr_serial  attr_version  namespaces

Maybe the discovery_nqn should be configurable per target port.
This is not the case in the patch that I have provided.

For example:
root@saturn:/sys/kernel/config/nvmet/ports# echo "nqn.2014-08.org.test.discovery" > 10120/param_discovery_nqn
root@saturn:/sys/kernel/config/nvmet/ports# cat 10030/param_discovery_nqn 
nqn.2014-08.org.test.discovery

Not sure if there is a real use case for having different discovery NQNs for different target ports.

Is this making sense or am I missing something?



[-- Attachment #2: discover_example.patch --]
[-- Type: application/octet-stream, Size: 3773 bytes --]

diff --git drivers/nvme/target/configfs.c drivers/nvme/target/configfs.c
index 091a0ca..ce05023 100644
--- drivers/nvme/target/configfs.c
+++ drivers/nvme/target/configfs.c
@@ -276,6 +276,69 @@ static ssize_t nvmet_param_pi_enable_store(struct config_item *item,
 CONFIGFS_ATTR(nvmet_, param_pi_enable);
 #endif
 
+static ssize_t nvmet_param_enable_target_show(struct config_item *item, char *page)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+	return snprintf(page, PAGE_SIZE, "%d\n",
+			port->enabled);
+}
+
+static ssize_t nvmet_param_enable_target_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+	bool val;
+	int ret = 0;
+
+	if (strtobool(page, &val))
+		return -EINVAL;
+	down_write(&nvmet_config_sem);
+	if (val) {
+		if (!port->enabled)
+			ret = nvmet_enable_port(port);
+	} else {
+		if (port->enabled)
+			nvmet_disable_port(port);
+	}
+	up_write(&nvmet_config_sem);
+	if (ret)
+		return -EINVAL;
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_enable_target);
+
+static ssize_t nvmet_param_discovery_nqn_show(struct config_item *item,
+			char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%s\n",
+			nvmet_disc_subsys->subsysnqn);
+}
+
+static ssize_t nvmet_param_discovery_nqn_store(struct config_item *item,
+			const char *page, size_t count)
+{
+	char *subsysnqn;
+	int len;
+
+	len = strcspn(page, "\n");
+	if (!len)
+		return -EINVAL;
+
+	subsysnqn = kmemdup_nul(page, len, GFP_KERNEL);
+	if (!subsysnqn)
+		return -ENOMEM;
+
+	down_write(&nvmet_config_sem);
+	kfree(nvmet_disc_subsys->subsysnqn);
+	nvmet_disc_subsys->subsysnqn = subsysnqn;
+	up_write(&nvmet_config_sem);
+
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_discovery_nqn);
+
 static ssize_t nvmet_addr_trtype_show(struct config_item *item,
 		char *page)
 {
@@ -1233,44 +1296,6 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_model);
 
-static ssize_t nvmet_subsys_attr_discovery_nqn_show(struct config_item *item,
-			char *page)
-{
-	return snprintf(page, PAGE_SIZE, "%s\n",
-			nvmet_disc_subsys->subsysnqn);
-}
-
-static ssize_t nvmet_subsys_attr_discovery_nqn_store(struct config_item *item,
-			const char *page, size_t count)
-{
-	struct nvmet_subsys *subsys = to_subsys(item);
-	char *subsysnqn;
-	int len;
-
-	len = strcspn(page, "\n");
-	if (!len)
-		return -EINVAL;
-
-	subsysnqn = kmemdup_nul(page, len, GFP_KERNEL);
-	if (!subsysnqn)
-		return -ENOMEM;
-
-	/*
-	 * The discovery NQN must be different from subsystem NQN.
-	 */
-	if (!strcmp(subsysnqn, subsys->subsysnqn)) {
-		kfree(subsysnqn);
-		return -EBUSY;
-	}
-	down_write(&nvmet_config_sem);
-	kfree(nvmet_disc_subsys->subsysnqn);
-	nvmet_disc_subsys->subsysnqn = subsysnqn;
-	up_write(&nvmet_config_sem);
-
-	return count;
-}
-CONFIGFS_ATTR(nvmet_subsys_, attr_discovery_nqn);
-
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 static ssize_t nvmet_subsys_attr_pi_enable_show(struct config_item *item,
 						char *page)
@@ -1300,7 +1325,6 @@ static struct configfs_attribute *nvmet_subsys_attrs[] = {
 	&nvmet_subsys_attr_attr_cntlid_min,
 	&nvmet_subsys_attr_attr_cntlid_max,
 	&nvmet_subsys_attr_attr_model,
-	&nvmet_subsys_attr_attr_discovery_nqn,
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 	&nvmet_subsys_attr_attr_pi_enable,
 #endif
@@ -1610,6 +1634,8 @@ static struct configfs_attribute *nvmet_port_attrs[] = {
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 	&nvmet_attr_param_pi_enable,
 #endif
+	&nvmet_attr_param_discovery_nqn,
+	&nvmet_attr_param_enable_target,
 	NULL,
 };
 

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: Nvmet discovery implementation. Questions and a patch / proposal.
  2022-02-28 16:39 Nvmet discovery implementation. Questions and a patch / proposal Mark Ruijter
@ 2022-03-01  7:05 ` Hannes Reinecke
  2022-03-01 12:50   ` Mark Ruijter
  2022-03-13 21:25   ` Sagi Grimberg
  0 siblings, 2 replies; 6+ messages in thread
From: Hannes Reinecke @ 2022-03-01  7:05 UTC (permalink / raw)
  To: Mark Ruijter, linux-nvme; +Cc: Sagi Grimberg

On 2/28/22 17:39, Mark Ruijter wrote:
> 
> The current discovery implementation only works after a subsystem has been created and has been linked to a port.
> As a result, 'nvme discover' from an initiator will only show a discovery device after the first subsystem has
> been exported.
> 
> The patch that I propose adds these features:
> A. Make discovery work immediately after the target port has been configured, without needing to configure
>    subsystems first.
> B. Move the discovery_nqn configuration option to the port instead of having the same configuration option
>    duplicated in every subsystem.
> C. Allow to enable and disable a target port without needing to unlink target subsystems.
> 
The current nvmet discovery implementation was designed against the
original NVMe spec, where a discovery controller could only return
information about the attached subsystems.
But with the recent changes to the spec a discovery controller can now
return information about itself (i.e. which port it's running on), so
it does make sense to have it enabled even with no subsystems attached.

But: the current implementation only has a _single_ discovery subsystem 
(nvmet_disc_subsys), which is shared across all configured subsystems.
And there's a filter when generating the discovery log page such that 
only the information for the current subsystem is returned (cf 
nvmet_execute_disc_get_log_page()).
So in effect each subsystem has its own discovery subsystem.

Moving the discovery nqn to the port will only confuse matters, as then
it's quite unclear what will happen if you change the discovery
subsystem nqn on a per-port basis; with the current design you would
change it for _all_ discovery subsystems ...

So if you want to do something like this, you'd need to change the
implementation to have a discovery subsystem per port, and also change
the logic in drivers/nvme/target/discovery.c such that the discovery
subsystems will return information for _all_ configured subsystems.

It's actually something I wanted to do, too, as I didn't like the 
current implementation :-)

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



* Re: Nvmet discovery implementation. Questions and a patch / proposal.
  2022-03-01  7:05 ` Hannes Reinecke
@ 2022-03-01 12:50   ` Mark Ruijter
  2022-03-13 21:25   ` Sagi Grimberg
  1 sibling, 0 replies; 6+ messages in thread
From: Mark Ruijter @ 2022-03-01 12:50 UTC (permalink / raw)
  To: Hannes Reinecke, linux-nvme; +Cc: Sagi Grimberg

[-- Attachment #1: Type: text/plain, Size: 6442 bytes --]

>> See inline,

    But: the current implementation only has a _single_ discovery subsystem 
    (nvmet_disc_subsys), which is shared across all configured subsystems.

>> The patch that I provided (slightly improved version attached) doesn't change this?
>> It merely removes the attr_discovery_nqn from every subsystem and adds it to the ports.
>> Maybe the discovery nqn should be moved to /sys/kernel/config/nvmet/discovery/attr_discovery_nqn.
>> This would make it clear that it is a global setting.

    And there's a filter when generating the discovery log page such that 
    only the information for the current subsystem is returned (cf 
    nvmet_execute_disc_get_log_page()).
    So in effect each subsystem has its own discovery subsystem.

>> Good to know. As far as I can tell this still works with the change?

    Moving the discovery nqn to the port will only confuse matters, as then 
    it's quite unclear what will happen if you change the discovery 
    subsystem nqn on a per-port basis; with the current design you would
    change it for _all_ discovery subsystem ...

>> Nothing changes in this respect when you use the attached patch?
>> When changing the param_discovery_nqn attribute of a port, it still changes _all_ discovery subsystems for all ports & subsystems.

>> Note that the current code tries to prevent the discovery nqn from being identical to a subsystem nqn.
>> However, the check is flawed: when you change it from another subsystem you can still end up with collisions.

Correct behavior:         
root@saturn:/sys/kernel/config/nvmet/subsystems# echo "nqn.2020-11.com.somesys:uuid:ubuntu-vg--ubuntu-lv1" > nqn.2020-11.com.somesys\:uuid\:ubuntu-vg--ubuntu-lv1/attr_discovery_nqn 
-bash: echo: write error: Device or resource busy

Creating a collision by using another subsystem:
root@saturn:/sys/kernel/config/nvmet/subsystems# echo "nqn.2020-11.com.somesys:uuid:ubuntu-vg--ubuntu-lv1" > nqn.2020-11.com.somesys\:uuid\:ubuntu-vg-ubuntu-lv2/attr_discovery_nqn 
root@saturn:/sys/kernel/config/nvmet/subsystems#

>> The change I made does not solve this problem, but it should be fairly easy to check all existing subsystem NQNs for a conflicting name.
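
To illustrate, here is a user-space sketch of such a check. The helper name check_discovery_nqn is made up, and a scratch directory stands in for /sys/kernel/config/nvmet/subsystems; the kernel-side version would instead walk the configured subsystems under nvmet_config_sem.

```shell
# Sketch of the proposed collision check. A scratch directory stands in
# for /sys/kernel/config/nvmet/subsystems.
cfg=$(mktemp -d)
mkdir -p "$cfg/nqn.2020-11.com.somesys:uuid:ubuntu-vg--ubuntu-lv1" \
         "$cfg/nqn.2020-11.com.somesys:uuid:ubuntu-vg-ubuntu-lv2"

check_discovery_nqn() {
    proposed=$1
    for d in "$cfg"/*; do
        # Refuse the proposed discovery NQN if any configured subsystem
        # already uses it as its subsystem NQN.
        if [ "$(basename "$d")" = "$proposed" ]; then
            echo "write error: Device or resource busy"
            return 1
        fi
    done
    echo ok
}

check_discovery_nqn "nqn.2014-08.org.test.discovery"
# prints: ok
check_discovery_nqn "nqn.2020-11.com.somesys:uuid:ubuntu-vg--ubuntu-lv1" || true
# prints: write error: Device or resource busy
```

The same walk, done inside nvmet_param_discovery_nqn_store() before the new NQN is swapped in, would close the collision shown in the transcript above.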

>> The following example illustrates that, with the attached patch, changing the discovery_nqn on one port also changes it for every subsystem and every other port.

root@saturn:/# nvme discover -t tcp -a 172.16.50.151 -s 4420 

Discovery Log Number of Records 2, Generation counter 6
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified, sq flow control disable supported
portid:  10030
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  172.16.50.151
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  10030
trsvcid: 4420
subnqn:  nqn.2020-11.com.somesys:uuid:ubuntu-vg--ubuntu-lv
traddr:  172.16.50.151
sectype: none
root@saturn:/# nvme discover -t tcp -a 10.100.10.1 -s 4420 

Discovery Log Number of Records 2, Generation counter 6
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified, sq flow control disable supported
portid:  10120
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  10.100.10.1
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  10120
trsvcid: 4420
subnqn:  nqn.2020-11.com.somesys:uuid:ubuntu-vg--ubuntu-lv
traddr:  10.100.10.1
sectype: none
root@saturn:/# echo nqn.2014-08.org.test.discovery >/sys/kernel/config/nvmet/ports/10030/
addr_adrfam             addr_treq               addr_trtype             param_discovery_nqn     param_inline_data_size  referrals/
addr_traddr             addr_trsvcid            ana_groups/             param_enable_target     param_pi_enable         subsystems/
root@saturn:/# echo nqn.2014-08.org.test.discovery >/sys/kernel/config/nvmet/ports/10030/param_discovery_nqn 
root@saturn:/# nvme discover -t tcp -a 10.100.10.1 -s 4420 

Discovery Log Number of Records 2, Generation counter 6
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified, sq flow control disable supported
portid:  10120
trsvcid: 4420
subnqn:  nqn.2014-08.org.test.discovery
traddr:  10.100.10.1
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  10120
trsvcid: 4420
subnqn:  nqn.2020-11.com.somesys:uuid:ubuntu-vg--ubuntu-lv
traddr:  10.100.10.1
sectype: none
root@saturn:/# nvme discover -t tcp -a 172.16.50.151 -s 4420 

Discovery Log Number of Records 2, Generation counter 6
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified, sq flow control disable supported
portid:  10030
trsvcid: 4420
subnqn:  nqn.2014-08.org.test.discovery
traddr:  172.16.50.151
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  10030
trsvcid: 4420
subnqn:  nqn.2020-11.com.somesys:uuid:ubuntu-vg--ubuntu-lv
traddr:  172.16.50.151
sectype: none
root@saturn:/#


    So if you want to do something like this, you'd need to change the 
    implementation to have a discovery subsystem per port, and also change 
    the logic in drivers/nvme/target/discovery.c such that the discovery
    subsystems will return informations for _all_ configured subsystems.

>> I wonder if we even need a separate discovery subsystem per port.
>> It would add code complexity for a feature that may not be very useful.

Thanks,

Mark



[-- Attachment #2: discover_example_v2.patch --]
[-- Type: application/octet-stream, Size: 4136 bytes --]

diff --git drivers/nvme/target/configfs.c drivers/nvme/target/configfs.c
index 091a0ca..8aa7a42 100644
--- drivers/nvme/target/configfs.c
+++ drivers/nvme/target/configfs.c
@@ -276,6 +276,69 @@ static ssize_t nvmet_param_pi_enable_store(struct config_item *item,
 CONFIGFS_ATTR(nvmet_, param_pi_enable);
 #endif
 
+static ssize_t nvmet_param_enable_target_show(struct config_item *item, char *page)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+	return snprintf(page, PAGE_SIZE, "%d\n",
+			port->enabled);
+}
+
+static ssize_t nvmet_param_enable_target_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+	bool val;
+	int ret = 0;
+
+	if (strtobool(page, &val))
+		return -EINVAL;
+	down_write(&nvmet_config_sem);
+	if (val) {
+		if (!port->enabled)
+			ret = nvmet_enable_port(port);
+	} else {
+		if (port->enabled)
+			nvmet_disable_port(port);
+	}
+	up_write(&nvmet_config_sem);
+	if (ret)
+		return -EINVAL;
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_enable_target);
+
+static ssize_t nvmet_param_discovery_nqn_show(struct config_item *item,
+			char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%s\n",
+			nvmet_disc_subsys->subsysnqn);
+}
+
+static ssize_t nvmet_param_discovery_nqn_store(struct config_item *item,
+			const char *page, size_t count)
+{
+	char *subsysnqn;
+	int len;
+
+	len = strcspn(page, "\n");
+	if (!len)
+		return -EINVAL;
+
+	subsysnqn = kmemdup_nul(page, len, GFP_KERNEL);
+	if (!subsysnqn)
+		return -ENOMEM;
+
+	down_write(&nvmet_config_sem);
+	kfree(nvmet_disc_subsys->subsysnqn);
+	nvmet_disc_subsys->subsysnqn = subsysnqn;
+	up_write(&nvmet_config_sem);
+
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_discovery_nqn);
+
 static ssize_t nvmet_addr_trtype_show(struct config_item *item,
 		char *page)
 {
@@ -827,9 +890,11 @@ static int nvmet_port_subsys_allow_link(struct config_item *parent,
 	}
 
 	if (list_empty(&port->subsystems)) {
-		ret = nvmet_enable_port(port);
-		if (ret)
-			goto out_free_link;
+		if (!port->enabled) {
+			ret = nvmet_enable_port(port);
+			if (ret)
+				goto out_free_link;
+		}
 	}
 
 	list_add_tail(&link->entry, &port->subsystems);
@@ -1233,44 +1298,6 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_model);
 
-static ssize_t nvmet_subsys_attr_discovery_nqn_show(struct config_item *item,
-			char *page)
-{
-	return snprintf(page, PAGE_SIZE, "%s\n",
-			nvmet_disc_subsys->subsysnqn);
-}
-
-static ssize_t nvmet_subsys_attr_discovery_nqn_store(struct config_item *item,
-			const char *page, size_t count)
-{
-	struct nvmet_subsys *subsys = to_subsys(item);
-	char *subsysnqn;
-	int len;
-
-	len = strcspn(page, "\n");
-	if (!len)
-		return -EINVAL;
-
-	subsysnqn = kmemdup_nul(page, len, GFP_KERNEL);
-	if (!subsysnqn)
-		return -ENOMEM;
-
-	/*
-	 * The discovery NQN must be different from subsystem NQN.
-	 */
-	if (!strcmp(subsysnqn, subsys->subsysnqn)) {
-		kfree(subsysnqn);
-		return -EBUSY;
-	}
-	down_write(&nvmet_config_sem);
-	kfree(nvmet_disc_subsys->subsysnqn);
-	nvmet_disc_subsys->subsysnqn = subsysnqn;
-	up_write(&nvmet_config_sem);
-
-	return count;
-}
-CONFIGFS_ATTR(nvmet_subsys_, attr_discovery_nqn);
-
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 static ssize_t nvmet_subsys_attr_pi_enable_show(struct config_item *item,
 						char *page)
@@ -1300,7 +1327,6 @@ static struct configfs_attribute *nvmet_subsys_attrs[] = {
 	&nvmet_subsys_attr_attr_cntlid_min,
 	&nvmet_subsys_attr_attr_cntlid_max,
 	&nvmet_subsys_attr_attr_model,
-	&nvmet_subsys_attr_attr_discovery_nqn,
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 	&nvmet_subsys_attr_attr_pi_enable,
 #endif
@@ -1610,6 +1636,8 @@ static struct configfs_attribute *nvmet_port_attrs[] = {
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 	&nvmet_attr_param_pi_enable,
 #endif
+	&nvmet_attr_param_discovery_nqn,
+	&nvmet_attr_param_enable_target,
 	NULL,
 };
 


* Re: Nvmet discovery implementation. Questions and a patch / proposal.
  2022-03-01  7:05 ` Hannes Reinecke
  2022-03-01 12:50   ` Mark Ruijter
@ 2022-03-13 21:25   ` Sagi Grimberg
  2022-03-14  6:59     ` Hannes Reinecke
  1 sibling, 1 reply; 6+ messages in thread
From: Sagi Grimberg @ 2022-03-13 21:25 UTC (permalink / raw)
  To: Hannes Reinecke, Mark Ruijter, linux-nvme



On 3/1/22 09:05, Hannes Reinecke wrote:
> On 2/28/22 17:39, Mark Ruijter wrote:
>>
>> The current discovery implementation only works after a subsystem has 
>> been created and has been linked to a port.
>> As a result, 'nvme discover' from an initiator will only show a 
>> discovery device after the first subsystem has
>> been exported.
>>
>> The patch that I propose adds these features:
>> A. Make discovery work immediately after the target port has been 
>> configured, without needing to configure
>>    subsystems first.
>> B. Move the discovery_nqn configuration option to the port instead of 
>> having the same configuration option
>>    duplicated in every subsystem.
>> C. Allow to enable and disable a target port without needing to unlink 
>> target subsystems.
>>
> The current nvmet discovery implementation was designed to the original 
> NVMe spec, where a discovery controller could only return informations 
> about the attached subsystems.
> But with the recent changes to the spec a discovery controller can now 
> return information about itself (ie which port it's running on), and so
> it does makes sense to have it enabled even with no subsystems attached.
> 
> But: the current implementation only has a _single_ discovery subsystem 
> (nvmet_disc_subsys), which is shared across all configured subsystems.
> And there's a filter when generating the discovery log page such that 
> only the information for the current subsystem is returned (cf 
> nvmet_execute_disc_get_log_page()).
> So in effect each subsystem has its own discovery subsystem.
> 
> Moving the discovery nqn to the port will only confuse matters, as then 
> it's quite unclear what will happen if you change the discovery 
> subsystem nqn on a per-port basis; with the current design you would
> change it for _all_ discovery subsystem ...
> 
> So if you want to do something like this, you'd need to change the 
> implementation to have a discovery subsystem per port, and also change 
> the logic in drivers/nvme/target/discovery.c such that the discovery
> subsystems will return informations for _all_ configured subsystems.
> 
> It's actually something I wanted to do, too, as I didn't like the 
> current implementation :-)

Why aren't we just creating the discovery subsystem as a static configfs
subsystem and just linking it to different ports?

The port listener on first linkage is a different matter afaict; I don't
see a strong argument for creating the port listener without any
subsystems exported anyway... maybe if the discovery log change event is
being used...



* Re: Nvmet discovery implementation. Questions and a patch / proposal.
  2022-03-13 21:25   ` Sagi Grimberg
@ 2022-03-14  6:59     ` Hannes Reinecke
  2022-03-14  9:59       ` Mark Ruijter
  0 siblings, 1 reply; 6+ messages in thread
From: Hannes Reinecke @ 2022-03-14  6:59 UTC (permalink / raw)
  To: Sagi Grimberg, Mark Ruijter, linux-nvme

On 3/13/22 22:25, Sagi Grimberg wrote:
> 
> 
> On 3/1/22 09:05, Hannes Reinecke wrote:
>> On 2/28/22 17:39, Mark Ruijter wrote:
>>>
>>> The current discovery implementation only works after a subsystem has 
>>> been created and has been linked to a port.
>>> As a result, 'nvme discover' from an initiator will only show a 
>>> discovery device after the first subsystem has
>>> been exported.
>>>
>>> The patch that I propose adds these features:
>>> A. Make discovery work immediately after the target port has been 
>>> configured, without needing to configure
>>>    subsystems first.
>>> B. Move the discovery_nqn configuration option to the port instead of 
>>> having the same configuration option
>>>    duplicated in every subsystem.
>>> C. Allow to enable and disable a target port without needing to 
>>> unlink target subsystems.
>>>
>> The current nvmet discovery implementation was designed to the 
>> original NVMe spec, where a discovery controller could only return 
>> informations about the attached subsystems.
>> But with the recent changes to the spec a discovery controller can now 
>> return information about itself (ie which port it's running on), and so
>> it does makes sense to have it enabled even with no subsystems attached.
>>
>> But: the current implementation only has a _single_ discovery 
>> subsystem (nvmet_disc_subsys), which is shared across all configured 
>> subsystems.
>> And there's a filter when generating the discovery log page such that 
>> only the information for the current subsystem is returned (cf 
>> nvmet_execute_disc_get_log_page()).
>> So in effect each subsystem has its own discovery subsystem.
>>
>> Moving the discovery nqn to the port will only confuse matters, as 
>> then it's quite unclear what will happen if you change the discovery 
>> subsystem nqn on a per-port basis; with the current design you would
>> change it for _all_ discovery subsystem ...
>>
>> So if you want to do something like this, you'd need to change the 
>> implementation to have a discovery subsystem per port, and also change 
>> the logic in drivers/nvme/target/discovery.c such that the discovery
>> subsystems will return informations for _all_ configured subsystems.
>>
>> It's actually something I wanted to do, too, as I didn't like the 
>> current implementation :-)
> 
> Why aren't we just creating the discovery subsystem as a static configfs
> subsystems and just link it to different ports?
> 
We could be doing that, but that would imply a change in behaviour.
Currently we have one distinct discovery controller per subsystem.
Switching to a static configfs discovery subsystem would inevitably 
result in a _single_ discovery subsystem for all configured subsystems.
Not that I'm against it (quite the contrary), but it should be mentioned.

> The port listener on first linkage is a different matter afaict, don't
> see a strong argument to make the port listener without any subsystems
> exported anyways... maybe if discovery log change event is being used...

Actually, we can just leave the existing mechanism as-is; ports would 
need to be linked explicitly to the discovery subsystem.
Reasoning: Not every port needs to present a discovery subsystem, and we 
might want to have ports which will _only_ present the discovery subsystem.
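
Something like the following configfs session, perhaps (purely hypothetical; none of this exists yet, and the static discovery subsystem directory name and port id 2 are made up for illustration):

```shell
# Hypothetical: a static discovery subsystem that is linked per port,
# so a port can present the discovery subsystem without (or instead of)
# any I/O subsystems.
cd /sys/kernel/config/nvmet
ls subsystems/
# nqn.2014-08.org.nvmexpress.discovery   <- would exist statically
ln -s subsystems/nqn.2014-08.org.nvmexpress.discovery \
      ports/2/subsystems/nqn.2014-08.org.nvmexpress.discovery
# port 2 would now answer 'nvme discover', but export no I/O subsystems
```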

Hmm. Should be pretty easy to code ...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



* Re: Nvmet discovery implementation. Questions and a patch / proposal.
  2022-03-14  6:59     ` Hannes Reinecke
@ 2022-03-14  9:59       ` Mark Ruijter
  0 siblings, 0 replies; 6+ messages in thread
From: Mark Ruijter @ 2022-03-14  9:59 UTC (permalink / raw)
  To: Hannes Reinecke, Sagi Grimberg, linux-nvme

  
>    Actually, we can just leave the existing mechanism as-is; ports would
>    need to be linked explicitly to the discovery subsystem.
>    Reasoning: Not every port needs to present a discovery subsystem, and we
>    might want to have ports which will _only_ present the discovery subsystem.

This makes sense to me since it's a flexible solution.

Maybe slightly off-topic but somewhat related:
Let's assume that a user wants to temporarily prevent initiators from using a certain target.
For example, when working on a switch or cabling.
 
The older SCSI targets (IET, SCST, and so on) allow enabling/disabling a target port using a single attribute.
For example:
/sys/kernel/scst_tgt/targets/qla2x00t/21:00:00:0e:1e:29:a0:70 # cat enabled 
1

The current nvmet target implementation requires unlinking all the subsystems even when you merely want to disable a port temporarily.
Wouldn't it be much easier if enabling/disabling a port were also possible without unlinking the subsystems?
The second patch I provided allows this without breaking the existing behaviour of enabling the port when the first subsystem is linked.
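A hypothetical session mirroring the SCST interface above; the enable attribute on nvmet ports is what the patch proposes and does not exist upstream:

```shell
cd /sys/kernel/config/nvmet/ports/1

# Hypothetical 'enable' attribute on the port
cat enable            # 1 -> port is up, subsystem links untouched
echo 0 > enable       # initiators can no longer connect

# ... switch / cabling maintenance ...

echo 1 > enable       # re-enable; the linked subsystems are served again
```

The point is that the subsystem symlinks under ports/1/subsystems/ survive the disable/enable cycle, so no target configuration has to be rebuilt afterwards.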

Thanks,

Mark

On 14/03/2022, 08:00, "Hannes Reinecke" <hare@suse.de> wrote:

    On 3/13/22 22:25, Sagi Grimberg wrote:
    > 
    > 
    > On 3/1/22 09:05, Hannes Reinecke wrote:
    >> On 2/28/22 17:39, Mark Ruijter wrote:
    >>>
    >>> The current discovery implementation only works after a subsystem has 
    >>> been created and has been linked to a port.
    >>> As a result, 'nvme discover' from an initiator will only show a 
    >>> discovery device after the first subsystem has
    >>> been exported.
    >>>
    >>> The patch that I propose adds these features:
    >>> A. Make discovery work immediately after the target port has been 
    >>> configured, without needing to configure
    >>>    subsystems first.
    >>> B. Move the discovery_nqn configuration option to the port instead of 
    >>> having the same configuration option
    >>>    duplicated in every subsystem.
    >>> C. Allow to enable and disable a target port without needing to 
    >>> unlink target subsystems.
    >>>
    >> The current nvmet discovery implementation was designed against the 
    >> original NVMe spec, where a discovery controller could only return 
    >> information about the attached subsystems.
    >> But with the recent changes to the spec a discovery controller can now 
    >> return information about itself (i.e. which port it's running on), so
    >> it does make sense to have it enabled even with no subsystems attached.
    >>
    >> But: the current implementation only has a _single_ discovery 
    >> subsystem (nvmet_disc_subsys), which is shared across all configured 
    >> subsystems.
    >> And there's a filter when generating the discovery log page such that 
    >> only the information for the current subsystem is returned (cf 
    >> nvmet_execute_disc_get_log_page()).
    >> So in effect each subsystem has its own discovery subsystem.
    >>
    >> Moving the discovery nqn to the port will only confuse matters, as 
    >> then it's quite unclear what will happen if you change the discovery 
    >> subsystem nqn on a per-port basis; with the current design you would
    >> change it for _all_ discovery subsystems ...
    >>
    >> So if you want to do something like this, you'd need to change the 
    >> implementation to have a discovery subsystem per port, and also change 
    >> the logic in drivers/nvme/target/discovery.c such that the discovery
    >> subsystems will return information for _all_ configured subsystems.
    >>
    >> It's actually something I wanted to do, too, as I didn't like the 
    >> current implementation :-)
    > 
    > Why aren't we just creating the discovery subsystem as a static configfs
    > subsystem and just linking it to different ports?
    > 
    We could be doing that, but that would imply a change in behaviour.
    Currently we have one distinct discovery controller per subsystem.
    Switching to a static configfs discovery subsystem would inevitably 
    result in a _single_ discovery subsystem for all configured subsystems.
    Not that I'm against it (quite the contrary), but it should be mentioned.

    > The port listener on first linkage is a different matter afaict, don't
    > see a strong argument to make the port listener without any subsystems
    > exported anyways... maybe if discovery log change event is being used...

    Actually, we can just leave the existing mechanism as-is; ports would 
    need to be linked explicitly to the discovery subsystem.
    Reasoning: Not every port needs to present a discovery subsystem, and we 
    might want to have ports which will _only_ present the discovery subsystem.

    Hmm. Should be pretty easy to code ...

    Cheers,

    Hannes
    -- 
    Dr. Hannes Reinecke                Kernel Storage Architect
    hare@suse.de                              +49 911 74053 688
    SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
    HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer



end of thread, other threads:[~2022-03-14  9:59 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-28 16:39 Nvmet discovery implementation. Questions and a patch / proposal Mark Ruijter
2022-03-01  7:05 ` Hannes Reinecke
2022-03-01 12:50   ` Mark Ruijter
2022-03-13 21:25   ` Sagi Grimberg
2022-03-14  6:59     ` Hannes Reinecke
2022-03-14  9:59       ` Mark Ruijter
