* [RFC PATCHv2 0/3] nvme: queue_if_no_path functionality
@ 2020-10-05  9:42 Hannes Reinecke
  2020-10-05  9:42 ` [PATCH 1/3] nvme-mpath: delete disk after last connection Hannes Reinecke
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Hannes Reinecke @ 2020-10-05  9:42 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch, Hannes Reinecke

Hi all,

this is a patchset based on Keith's original patch for restoring the
pre-fabrics behaviour when the last path to a multipath device is removed.
Originally, the nvme device was removed once the underlying hardware was
gone. With the introduction of multipath support this changed; it is now
the 'CMIC' bit in the controller identification which controls the
behaviour. If it is non-zero, the device is retained even if the hardware
is removed. While this is fine for fabrics (where we can manually connect
and disconnect the devices), for nvme-pci it means that PCI hotplug ceases
to work: the device is never removed, and reinserting the hardware creates
a new nvme device.
This patchset introduces a 'queue_if_no_path' flag to control the handling
of the last path; it is set for fabrics to retain the current behaviour,
but unset for PCI to revert to the original, pre-fabrics behaviour.
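
For illustration, with patch 3/3 applied the flag would be visible in
sysfs; a hypothetical session (the device path and the default shown are
assumptions derived from the patches below, not output from a real run):

 # cat /sys/block/nvme1n1/queue_if_no_path
 on
 # echo 0 > /sys/block/nvme1n1/queue_if_no_path
 # cat /sys/block/nvme1n1/queue_if_no_path
 off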

Hannes Reinecke (2):
  nvme: add 'queue_if_no_path' semantics
  nvme: add 'queue_if_no_path' sysfs attribute

Keith Busch (1):
  nvme-mpath: delete disk after last connection

 drivers/nvme/host/core.c      | 42 +++++++++++++++++++++++++++++++++++++++++-
 drivers/nvme/host/multipath.c |  5 ++++-
 drivers/nvme/host/nvme.h      |  3 ++-
 3 files changed, 47 insertions(+), 3 deletions(-)

-- 
2.16.4



* [PATCH 1/3] nvme-mpath: delete disk after last connection
  2020-10-05  9:42 [RFC PATCHv2 0/3] nvme: queue_if_no_path functionality Hannes Reinecke
@ 2020-10-05  9:42 ` Hannes Reinecke
  2020-10-05 11:27   ` Christoph Hellwig
  2020-10-05  9:42 ` [PATCH 2/3] nvme: add 'queue_if_no_path' semantics Hannes Reinecke
  2020-10-05  9:42 ` [PATCH 3/3] nvme: add 'queue_if_no_path' sysfs attribute Hannes Reinecke
  2 siblings, 1 reply; 9+ messages in thread
From: Hannes Reinecke @ 2020-10-05  9:42 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Keith Busch, linux-nvme, Sagi Grimberg, Keith Busch

From: Keith Busch <kbusch@kernel.org>

The multipath code currently deletes the disk only after all references
to it are dropped rather than when the last path to that disk is lost.
This has been reported to cause problems with some usage, like MD RAID.

Delete the disk when the last path is gone. This is the same behavior we
currently have with non-multipathed nvme devices.

The following simple example demonstrates what is currently observed,
using an nvme loopback setup (loop setup file not shown):

 # nvmetcli restore loop.json
 [   31.156452] nvmet: adding nsid 1 to subsystem testnqn1
 [   31.159140] nvmet: adding nsid 1 to subsystem testnqn2

 # nvme connect -t loop -n testnqn1 -q hostnqn
 [   36.866302] nvmet: creating controller 1 for subsystem testnqn1 for NQN hostnqn.
 [   36.872926] nvme nvme3: new ctrl: "testnqn1"

 # nvme connect -t loop -n testnqn1 -q hostnqn
 [   38.227186] nvmet: creating controller 2 for subsystem testnqn1 for NQN hostnqn.
 [   38.234450] nvme nvme4: new ctrl: "testnqn1"

 # nvme connect -t loop -n testnqn2 -q hostnqn
 [   43.902761] nvmet: creating controller 3 for subsystem testnqn2 for NQN hostnqn.
 [   43.907401] nvme nvme5: new ctrl: "testnqn2"

 # nvme connect -t loop -n testnqn2 -q hostnqn
 [   44.627689] nvmet: creating controller 4 for subsystem testnqn2 for NQN hostnqn.
 [   44.641773] nvme nvme6: new ctrl: "testnqn2"

 # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/nvme3n1 /dev/nvme5n1
 [   53.497038] md/raid1:md0: active with 2 out of 2 mirrors
 [   53.501717] md0: detected capacity change from 0 to 66060288

 # cat /proc/mdstat
 Personalities : [raid1]
 md0 : active raid1 nvme5n1[1] nvme3n1[0]
       64512 blocks super 1.2 [2/2] [UU]

Now delete all paths to one of the namespaces:

 # echo 1 > /sys/class/nvme/nvme3/delete_controller
 # echo 1 > /sys/class/nvme/nvme4/delete_controller

We have no path, but mdstat says:

 # cat /proc/mdstat
 Personalities : [raid1]
 md0 : active (auto-read-only) raid1 nvme5n1[1]
       64512 blocks super 1.2 [2/1] [_U]

And this stale state is what has been reported to cause problems.

With the proposed patch, the following messages appear:

 [  227.516807] md/raid1:md0: Disk failure on nvme3n1, disabling device.
 [  227.516807] md/raid1:md0: Operation continuing on 1 devices.

And mdstat shows only the viable members:

 # cat /proc/mdstat
 Personalities : [raid1]
 md0 : active (auto-read-only) raid1 nvme5n1[1]
       64512 blocks super 1.2 [2/1] [_U]

Reported-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/core.c      | 3 ++-
 drivers/nvme/host/multipath.c | 1 -
 drivers/nvme/host/nvme.h      | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 385b10317873..af950d58c562 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -474,7 +474,8 @@ static void nvme_free_ns_head(struct kref *ref)
 	struct nvme_ns_head *head =
 		container_of(ref, struct nvme_ns_head, ref);
 
-	nvme_mpath_remove_disk(head);
+	if (head->disk)
+		put_disk(head->disk);
 	ida_simple_remove(&head->subsys->ns_ida, head->instance);
 	cleanup_srcu_struct(&head->srcu);
 	nvme_put_subsystem(head->subsys);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 74896be40c17..55045291b4de 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -697,7 +697,6 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
 		 */
 		head->disk->queue = NULL;
 	}
-	put_disk(head->disk);
 }
 
 int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 566776100126..b6180bb3361d 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -670,7 +670,7 @@ static inline void nvme_mpath_check_last_path(struct nvme_ns *ns)
 	struct nvme_ns_head *head = ns->head;
 
 	if (head->disk && list_empty(&head->list))
-		kblockd_schedule_work(&head->requeue_work);
+		nvme_mpath_remove_disk(head);
 }
 
 static inline void nvme_trace_bio_complete(struct request *req,
-- 
2.16.4



* [PATCH 2/3] nvme: add 'queue_if_no_path' semantics
  2020-10-05  9:42 [RFC PATCHv2 0/3] nvme: queue_if_no_path functionality Hannes Reinecke
  2020-10-05  9:42 ` [PATCH 1/3] nvme-mpath: delete disk after last connection Hannes Reinecke
@ 2020-10-05  9:42 ` Hannes Reinecke
  2020-10-05 11:29   ` Christoph Hellwig
  2020-10-05  9:42 ` [PATCH 3/3] nvme: add 'queue_if_no_path' sysfs attribute Hannes Reinecke
  2 siblings, 1 reply; 9+ messages in thread
From: Hannes Reinecke @ 2020-10-05  9:42 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch, Hannes Reinecke

Instead of reverting the handling of the 'last' path for all
setups we should restrict it to the non-fabrics case.
So add a 'queue_if_no_path' flag which preserves the current
behaviour; disabling this flag reverts to the original
(pre-fabrics) behaviour.
This flag is set by default for fabrics.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/core.c      | 7 ++++++-
 drivers/nvme/host/multipath.c | 4 ++++
 drivers/nvme/host/nvme.h      | 1 +
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index af950d58c562..79b88b4c448f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -474,8 +474,11 @@ static void nvme_free_ns_head(struct kref *ref)
 	struct nvme_ns_head *head =
 		container_of(ref, struct nvme_ns_head, ref);
 
-	if (head->disk)
+	if (head->disk) {
+		if (test_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags))
+			nvme_mpath_remove_disk(head);
 		put_disk(head->disk);
+	}
 	ida_simple_remove(&head->subsys->ns_ida, head->instance);
 	cleanup_srcu_struct(&head->srcu);
 	nvme_put_subsystem(head->subsys);
@@ -3686,6 +3689,8 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
 	head->subsys = ctrl->subsys;
 	head->ns_id = nsid;
 	head->ids = *ids;
+	if (ctrl->ops->flags & NVME_F_FABRICS)
+		set_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags);
 	kref_init(&head->ref);
 
 	ret = __nvme_check_ids(ctrl->subsys, head);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 55045291b4de..05c4fbc35bb2 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -682,6 +682,10 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
 {
 	if (!head->disk)
 		return;
+	if (test_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags)) {
+		kblockd_schedule_work(&head->requeue_work);
+		return;
+	}
 	if (head->disk->flags & GENHD_FL_UP)
 		del_gendisk(head->disk);
 	blk_set_queue_dying(head->disk->queue);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index b6180bb3361d..baf4a84918d8 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -411,6 +411,7 @@ struct nvme_ns_head {
 	struct mutex		lock;
 	unsigned long		flags;
 #define NVME_NSHEAD_DISK_LIVE	0
+#define NVME_NSHEAD_QUEUE_IF_NO_PATH 1
 	struct nvme_ns __rcu	*current_path[];
 #endif
 };
-- 
2.16.4



* [PATCH 3/3] nvme: add 'queue_if_no_path' sysfs attribute
  2020-10-05  9:42 [RFC PATCHv2 0/3] nvme: queue_if_no_path functionality Hannes Reinecke
  2020-10-05  9:42 ` [PATCH 1/3] nvme-mpath: delete disk after last connection Hannes Reinecke
  2020-10-05  9:42 ` [PATCH 2/3] nvme: add 'queue_if_no_path' semantics Hannes Reinecke
@ 2020-10-05  9:42 ` Hannes Reinecke
  2020-10-05 11:31   ` Christoph Hellwig
  2020-10-05 11:38   ` Christoph Hellwig
  2 siblings, 2 replies; 9+ messages in thread
From: Hannes Reinecke @ 2020-10-05  9:42 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch, Hannes Reinecke

Add a sysfs attribute 'queue_if_no_path' to allow the admin to
view and modify the 'queue_if_no_path' flag.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/core.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 79b88b4c448f..87589f0adea3 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3349,12 +3349,42 @@ static ssize_t nsid_show(struct device *dev, struct device_attribute *attr,
 }
 static DEVICE_ATTR_RO(nsid);
 
+static ssize_t queue_if_no_path_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct nvme_ns_head *head = dev_to_ns_head(dev);
+
+	return sprintf(buf, "%s\n",
+		       test_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags) ?
+		       "on" : "off");
+}
+
+static ssize_t queue_if_no_path_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct nvme_ns_head *head = dev_to_ns_head(dev);
+	int queue_if_no_path, err;
+
+	err = kstrtoint(buf, 10, &queue_if_no_path);
+	if (err)
+		return -EINVAL;
+
+	else if (queue_if_no_path <= 0)
+		clear_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags);
+	else
+		set_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags);
+	return count;
+}
+static DEVICE_ATTR(queue_if_no_path, S_IRUGO | S_IWUSR,
+	queue_if_no_path_show, queue_if_no_path_store);
+
 static struct attribute *nvme_ns_id_attrs[] = {
 	&dev_attr_wwid.attr,
 	&dev_attr_uuid.attr,
 	&dev_attr_nguid.attr,
 	&dev_attr_eui.attr,
 	&dev_attr_nsid.attr,
+	&dev_attr_queue_if_no_path.attr,
 #ifdef CONFIG_NVME_MULTIPATH
 	&dev_attr_ana_grpid.attr,
 	&dev_attr_ana_state.attr,
@@ -3381,6 +3411,10 @@ static umode_t nvme_ns_id_attrs_are_visible(struct kobject *kobj,
 		if (!memchr_inv(ids->eui64, 0, sizeof(ids->eui64)))
 			return 0;
 	}
+	if (a == &dev_attr_queue_if_no_path.attr) {
+		if (dev_to_disk(dev)->fops == &nvme_fops)
+			return 0;
+	}
 #ifdef CONFIG_NVME_MULTIPATH
 	if (a == &dev_attr_ana_grpid.attr || a == &dev_attr_ana_state.attr) {
 		if (dev_to_disk(dev)->fops != &nvme_fops) /* per-path attr */
-- 
2.16.4



* Re: [PATCH 1/3] nvme-mpath: delete disk after last connection
  2020-10-05  9:42 ` [PATCH 1/3] nvme-mpath: delete disk after last connection Hannes Reinecke
@ 2020-10-05 11:27   ` Christoph Hellwig
  0 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2020-10-05 11:27 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Keith Busch, linux-nvme, Christoph Hellwig, Keith Busch, Sagi Grimberg

On Mon, Oct 05, 2020 at 11:42:54AM +0200, Hannes Reinecke wrote:
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 385b10317873..af950d58c562 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -474,7 +474,8 @@ static void nvme_free_ns_head(struct kref *ref)
>  	struct nvme_ns_head *head =
>  		container_of(ref, struct nvme_ns_head, ref);
>  
> -	nvme_mpath_remove_disk(head);
> +	if (head->disk)
> +		put_disk(head->disk);

This wasn't compile tested for the !CONFIG_NVME_MULTIPATH case.
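
(For context, a sketch of why it breaks — based on the nvme.h layout
also visible in patch 2/3; the abridged struct below is an assumption
from the mainline header of that era, not quoted from this series. The
'disk' member only exists inside the multipath block, so core.c cannot
dereference head->disk unconditionally:)

	struct nvme_ns_head {
		...
	#ifdef CONFIG_NVME_MULTIPATH
		struct gendisk		*disk;
		...
		unsigned long		flags;
		struct nvme_ns __rcu	*current_path[];
	#endif
	};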


* Re: [PATCH 2/3] nvme: add 'queue_if_no_path' semantics
  2020-10-05  9:42 ` [PATCH 2/3] nvme: add 'queue_if_no_path' semantics Hannes Reinecke
@ 2020-10-05 11:29   ` Christoph Hellwig
  0 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2020-10-05 11:29 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: linux-nvme, Christoph Hellwig, Keith Busch, Sagi Grimberg

On Mon, Oct 05, 2020 at 11:42:55AM +0200, Hannes Reinecke wrote:
> Instead of reverting the handling of the 'last' path for all
> setups we should restrict it to the non-fabrics case.
> So add a 'queue_if_no_path' flag which preserves the current
> behaviour; disabling this flag reverts to the original
> (pre-fabrics) behaviour.
> This flag is set by default for fabrics.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/nvme/host/core.c      | 7 ++++++-
>  drivers/nvme/host/multipath.c | 4 ++++
>  drivers/nvme/host/nvme.h      | 1 +
>  3 files changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index af950d58c562..79b88b4c448f 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -474,8 +474,11 @@ static void nvme_free_ns_head(struct kref *ref)
>  	struct nvme_ns_head *head =
>  		container_of(ref, struct nvme_ns_head, ref);
>  
> -	if (head->disk)
> +	if (head->disk) {
> +		if (test_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags))
> +			nvme_mpath_remove_disk(head);
>  		put_disk(head->disk);
> +	}

This needs to be in a helper in multipath.c so that it can compile fine
for the non-multipath case.
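
A minimal sketch of such a helper (illustrative only; the name
nvme_mpath_put_disk and the exact split are assumptions, not code from
this series — the body just mirrors the hunk quoted above):

	/* multipath.c: drop the head's gendisk, honouring queue_if_no_path */
	void nvme_mpath_put_disk(struct nvme_ns_head *head)
	{
		if (!head->disk)
			return;
		if (test_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags))
			nvme_mpath_remove_disk(head);
		put_disk(head->disk);
	}

	/* nvme.h: no-op stub so nvme_free_ns_head() compiles without
	 * CONFIG_NVME_MULTIPATH, where head->disk does not exist */
	#ifndef CONFIG_NVME_MULTIPATH
	static inline void nvme_mpath_put_disk(struct nvme_ns_head *head)
	{
	}
	#endif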

> @@ -3686,6 +3689,8 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
>  	head->subsys = ctrl->subsys;
>  	head->ns_id = nsid;
>  	head->ids = *ids;
> +	if (ctrl->ops->flags & NVME_F_FABRICS)
> +		set_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags);

This needs a comment at the very least.  In fact I'm not sure we should
mess with the defaults here.

How will this bit get set for the non-fabrics case?


* Re: [PATCH 3/3] nvme: add 'queue_if_no_path' sysfs attribute
  2020-10-05  9:42 ` [PATCH 3/3] nvme: add 'queue_if_no_path' sysfs attribute Hannes Reinecke
@ 2020-10-05 11:31   ` Christoph Hellwig
  2020-10-05 11:38   ` Christoph Hellwig
  1 sibling, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2020-10-05 11:31 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: linux-nvme, Christoph Hellwig, Keith Busch, Sagi Grimberg

On Mon, Oct 05, 2020 at 11:42:56AM +0200, Hannes Reinecke wrote:
> Add a sysfs attribute 'queue_if_no_path' to allow the admin to
> view and modify the 'queue_if_no_path' flag.

ok, here it gets added.  I think this needs to be folded into the
previous patch.

> +{
> +	struct nvme_ns_head *head = dev_to_ns_head(dev);
> +	int queue_if_no_path, err;
> +
> +	err = kstrtoint(buf, 10, &queue_if_no_path);
> +	if (err)
> +		return -EINVAL;
> +
> +	else if (queue_if_no_path <= 0)

I think this needs to use kstrtobool().
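
(A hedged sketch of that suggestion for the _store body — note that
kstrtobool() also accepts the "on"/"off" strings the _show side prints:)

	bool queue_if_no_path;

	if (kstrtobool(buf, &queue_if_no_path) < 0)
		return -EINVAL;
	if (queue_if_no_path)
		set_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags);
	else
		clear_bit(NVME_NSHEAD_QUEUE_IF_NO_PATH, &head->flags);
	return count;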

>  #ifdef CONFIG_NVME_MULTIPATH
>  	&dev_attr_ana_grpid.attr,
>  	&dev_attr_ana_state.attr,
> @@ -3381,6 +3411,10 @@ static umode_t nvme_ns_id_attrs_are_visible(struct kobject *kobj,
>  		if (!memchr_inv(ids->eui64, 0, sizeof(ids->eui64)))
>  			return 0;
>  	}
> +	if (a == &dev_attr_queue_if_no_path.attr) {
> +		if (dev_to_disk(dev)->fops == &nvme_fops)
> +			return 0;
> +	}
>  #ifdef CONFIG_NVME_MULTIPATH

The attribute only makes sense for the multipathing code, so it should
be under CONFIG_NVME_MULTIPATH.
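
(i.e. roughly, as a sketch — move the entry into the existing block:)

	#ifdef CONFIG_NVME_MULTIPATH
		&dev_attr_queue_if_no_path.attr,
		&dev_attr_ana_grpid.attr,
		&dev_attr_ana_state.attr,
	#endif

and similarly guard the visibility check in
nvme_ns_id_attrs_are_visible().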


* Re: [PATCH 3/3] nvme: add 'queue_if_no_path' sysfs attribute
  2020-10-05  9:42 ` [PATCH 3/3] nvme: add 'queue_if_no_path' sysfs attribute Hannes Reinecke
  2020-10-05 11:31   ` Christoph Hellwig
@ 2020-10-05 11:38   ` Christoph Hellwig
  2020-10-05 11:56     ` Hannes Reinecke
  1 sibling, 1 reply; 9+ messages in thread
From: Christoph Hellwig @ 2020-10-05 11:38 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: linux-nvme, Christoph Hellwig, Keith Busch, Sagi Grimberg

Oh, and shouldn't the attribute be per-subsystem, similar to the
iopolicy one?


* Re: [PATCH 3/3] nvme: add 'queue_if_no_path' sysfs attribute
  2020-10-05 11:38   ` Christoph Hellwig
@ 2020-10-05 11:56     ` Hannes Reinecke
  0 siblings, 0 replies; 9+ messages in thread
From: Hannes Reinecke @ 2020-10-05 11:56 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-nvme, Sagi Grimberg, Keith Busch

On 10/5/20 1:38 PM, Christoph Hellwig wrote:
> Oh, and shouldn't the attribute be per-subsystem, similar to the
> iopolicy one?
> 
Well, I've thought about that, too, but then figured that it's pretty
much an admin choice.
The admin _might_ want either behaviour, depending on the use-case
(read: MD RAID or cluster scenarios). But these use-cases are pretty
much per-namespace, so I'm not sure per-subsystem is a good fit here.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

