* [PATCH 0/2] nvmet: add some more configfs attributes
@ 2019-03-29 12:04 Hannes Reinecke
  2019-03-29 12:04 ` [PATCH 1/2] nvmet: make MDTS value configurable Hannes Reinecke
  2019-03-29 12:04 ` [PATCH 2/2] nvmet: make CNTLID range configurable Hannes Reinecke
  0 siblings, 2 replies; 13+ messages in thread
From: Hannes Reinecke @ 2019-03-29 12:04 UTC (permalink / raw)


Hi all,

here are two small patches to make the 'MDTS' value and the 'CNTLID'
range configurable. The first is primarily for debugging, but the other
is required if the nvme target is deployed in a cluster.

As usual, comments and reviews are welcome.

Hannes Reinecke (2):
  nvmet: make MDTS value configurable
  nvmet: make CNTLID range configurable

 drivers/nvme/target/admin-cmd.c |  3 +-
 drivers/nvme/target/configfs.c  | 87 +++++++++++++++++++++++++++++++++++++++++
 drivers/nvme/target/core.c      |  4 +-
 drivers/nvme/target/nvmet.h     |  4 ++
 4 files changed, 95 insertions(+), 3 deletions(-)

-- 
2.16.4


* [PATCH 1/2] nvmet: make MDTS value configurable
  2019-03-29 12:04 [PATCH 0/2] nvmet: add some more configfs attributes Hannes Reinecke
@ 2019-03-29 12:04 ` Hannes Reinecke
  2019-03-29 17:27   ` Heitke, Kenneth
  2019-03-31 15:04   ` Max Gurtovoy
  2019-03-29 12:04 ` [PATCH 2/2] nvmet: make CNTLID range configurable Hannes Reinecke
  1 sibling, 2 replies; 13+ messages in thread
From: Hannes Reinecke @ 2019-03-29 12:04 UTC (permalink / raw)


For testing the bio splitting algorithm it's useful to be able to
modify the mdts value.

Signed-off-by: Hannes Reinecke <hare at suse.com>
---
 drivers/nvme/target/admin-cmd.c |  3 +--
 drivers/nvme/target/configfs.c  | 27 +++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h     |  2 ++
 3 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 76250181fee0..1c4cb371fac8 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -311,8 +311,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 	/* we support multiple ports, multiples hosts and ANA: */
 	id->cmic = (1 << 0) | (1 << 1) | (1 << 3);
 
-	/* no limit on data transfer sizes for now */
-	id->mdts = 0;
+	id->mdts = ctrl->subsys->mdts;
 	id->cntlid = cpu_to_le16(ctrl->cntlid);
 	id->ver = cpu_to_le32(ctrl->subsys->ver);
 
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index adb79545cdd7..63b9991de79e 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -859,10 +859,37 @@ static ssize_t nvmet_subsys_attr_serial_store(struct config_item *item,
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_serial);
 
+static ssize_t nvmet_subsys_attr_mdts_show(struct config_item *item,
+					   char *page)
+{
+	struct nvmet_subsys *subsys = to_subsys(item);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", subsys->mdts);
+}
+
+static ssize_t nvmet_subsys_attr_mdts_store(struct config_item *item,
+					    const char *page, size_t count)
+{
+	struct nvmet_subsys *subsys = to_subsys(item);
+	unsigned int mdts;
+	int ret;
+
+	ret = kstrtou32(page, 0, &mdts);
+	if (ret)
+		return ret;
+	down_write(&nvmet_config_sem);
+	subsys->mdts = mdts;
+	up_write(&nvmet_config_sem);
+
+	return count;
+}
+CONFIGFS_ATTR(nvmet_subsys_, attr_mdts);
+
 static struct configfs_attribute *nvmet_subsys_attrs[] = {
 	&nvmet_subsys_attr_attr_allow_any_host,
 	&nvmet_subsys_attr_attr_version,
 	&nvmet_subsys_attr_attr_serial,
+	&nvmet_subsys_attr_attr_mdts,
 	NULL,
 };
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 51e49efd7849..f05f069f83ea 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -217,6 +217,8 @@ struct nvmet_subsys {
 	bool			allow_any_host;
 
 	u16			max_qid;
+	u8			mdts;
+	u8			mpsmin;
 
 	u64			ver;
 	u64			serial;
-- 
2.16.4


* [PATCH 2/2] nvmet: make CNTLID range configurable
  2019-03-29 12:04 [PATCH 0/2] nvmet: add some more configfs attributes Hannes Reinecke
  2019-03-29 12:04 ` [PATCH 1/2] nvmet: make MDTS value configurable Hannes Reinecke
@ 2019-03-29 12:04 ` Hannes Reinecke
  2019-04-03 18:11   ` Christoph Hellwig
  1 sibling, 1 reply; 13+ messages in thread
From: Hannes Reinecke @ 2019-03-29 12:04 UTC (permalink / raw)


When trying to run in a clustered environment we need to keep the
controller ID values unique. So this patch adds two additional config
attributes, 'cntlid_min' and 'cntlid_max', to only allow specific ranges
for the controller ID.

Signed-off-by: Hannes Reinecke <hare at suse.com>
---
 drivers/nvme/target/configfs.c | 60 ++++++++++++++++++++++++++++++++++++++++++
 drivers/nvme/target/core.c     |  4 ++-
 drivers/nvme/target/nvmet.h    |  2 ++
 3 files changed, 65 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 63b9991de79e..07f837abea62 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -885,11 +885,71 @@ static ssize_t nvmet_subsys_attr_mdts_store(struct config_item *item,
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_mdts);
 
+static ssize_t nvmet_subsys_attr_cntlid_min_show(struct config_item *item,
+						 char *page)
+{
+	struct nvmet_subsys *subsys = to_subsys(item);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", subsys->min_cntlid);
+}
+
+static ssize_t nvmet_subsys_attr_cntlid_min_store(struct config_item *item,
+						  const char *page, size_t count)
+{
+	struct nvmet_subsys *subsys = to_subsys(item);
+	unsigned int min;
+	int ret;
+
+	ret = kstrtou32(page, 0, &min);
+	if (ret)
+		return ret;
+	down_write(&nvmet_config_sem);
+	if (min >= subsys->max_cntlid || min < NVME_CNTLID_MIN)
+		ret = -EINVAL;
+	else
+		subsys->min_cntlid = min;
+	up_write(&nvmet_config_sem);
+
+	return ret ? ret : count;
+}
+CONFIGFS_ATTR(nvmet_subsys_, attr_cntlid_min);
+
+static ssize_t nvmet_subsys_attr_cntlid_max_show(struct config_item *item,
+						 char *page)
+{
+	struct nvmet_subsys *subsys = to_subsys(item);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", subsys->max_cntlid);
+}
+
+static ssize_t nvmet_subsys_attr_cntlid_max_store(struct config_item *item,
+						  const char *page, size_t count)
+{
+	struct nvmet_subsys *subsys = to_subsys(item);
+	unsigned int max;
+	int ret;
+
+	ret = kstrtou32(page, 0, &max);
+	if (ret)
+		return ret;
+	down_write(&nvmet_config_sem);
+	if (max >= NVME_CNTLID_MAX || max < subsys->min_cntlid)
+		ret = -EINVAL;
+	else
+		subsys->max_cntlid = max;
+	up_write(&nvmet_config_sem);
+
+	return ret ? ret : count;
+}
+CONFIGFS_ATTR(nvmet_subsys_, attr_cntlid_max);
+
 static struct configfs_attribute *nvmet_subsys_attrs[] = {
 	&nvmet_subsys_attr_attr_allow_any_host,
 	&nvmet_subsys_attr_attr_version,
 	&nvmet_subsys_attr_attr_serial,
 	&nvmet_subsys_attr_attr_mdts,
+	&nvmet_subsys_attr_attr_cntlid_min,
+	&nvmet_subsys_attr_attr_cntlid_max,
 	NULL,
 };
 
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 2d73b66e3686..fe60734b421d 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1241,7 +1241,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
 		goto out_free_cqs;
 
 	ret = ida_simple_get(&cntlid_ida,
-			     NVME_CNTLID_MIN, NVME_CNTLID_MAX,
+			     subsys->min_cntlid, subsys->max_cntlid,
 			     GFP_KERNEL);
 	if (ret < 0) {
 		status = NVME_SC_CONNECT_CTRL_BUSY | NVME_SC_DNR;
@@ -1382,6 +1382,8 @@ struct nvmet_subsys *nvmet_subsys_alloc(const char *subsysnqn,
 		kfree(subsys);
 		return NULL;
 	}
+	subsys->min_cntlid = NVME_CNTLID_MIN;
+	subsys->max_cntlid = NVME_CNTLID_MAX;
 	subsys->type = type;
 	subsys->subsysnqn = kstrndup(subsysnqn, NVMF_NQN_SIZE,
 			GFP_KERNEL);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index f05f069f83ea..7bc81bf7d45e 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -212,6 +212,8 @@ struct nvmet_subsys {
 	unsigned int		max_nsid;
 
 	struct list_head	ctrls;
+	unsigned int		min_cntlid;
+	unsigned int		max_cntlid;
 
 	struct list_head	hosts;
 	bool			allow_any_host;
-- 
2.16.4


* [PATCH 1/2] nvmet: make MDTS value configurable
  2019-03-29 12:04 ` [PATCH 1/2] nvmet: make MDTS value configurable Hannes Reinecke
@ 2019-03-29 17:27   ` Heitke, Kenneth
  2019-03-31 15:04   ` Max Gurtovoy
  1 sibling, 0 replies; 13+ messages in thread
From: Heitke, Kenneth @ 2019-03-29 17:27 UTC (permalink / raw)




On 3/29/2019 6:04 AM, Hannes Reinecke wrote:
> For testing the bio splitting algorithm it's useful to be able to
> modify the mdts value.
> 
> Signed-off-by: Hannes Reinecke <hare at suse.com>
> ---
>   drivers/nvme/target/admin-cmd.c |  3 +--
>   drivers/nvme/target/configfs.c  | 27 +++++++++++++++++++++++++++
>   drivers/nvme/target/nvmet.h     |  2 ++
>   3 files changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
> index 76250181fee0..1c4cb371fac8 100644
> --- a/drivers/nvme/target/admin-cmd.c
> +++ b/drivers/nvme/target/admin-cmd.c
> @@ -311,8 +311,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
>   	/* we support multiple ports, multiples hosts and ANA: */
>   	id->cmic = (1 << 0) | (1 << 1) | (1 << 3);
>   
> -	/* no limit on data transfer sizes for now */
> -	id->mdts = 0;
> +	id->mdts = ctrl->subsys->mdts;
>   	id->cntlid = cpu_to_le16(ctrl->cntlid);
>   	id->ver = cpu_to_le32(ctrl->subsys->ver);
>   
> diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
> index adb79545cdd7..63b9991de79e 100644
> --- a/drivers/nvme/target/configfs.c
> +++ b/drivers/nvme/target/configfs.c
> @@ -859,10 +859,37 @@ static ssize_t nvmet_subsys_attr_serial_store(struct config_item *item,
>   }
>   CONFIGFS_ATTR(nvmet_subsys_, attr_serial);
>   
> +static ssize_t nvmet_subsys_attr_mdts_show(struct config_item *item,
> +					   char *page)
> +{
> +	struct nvmet_subsys *subsys = to_subsys(item);
> +
> +	return snprintf(page, PAGE_SIZE, "%u\n", subsys->mdts);
> +}
> +
> +static ssize_t nvmet_subsys_attr_mdts_store(struct config_item *item,
> +					    const char *page, size_t count)
> +{
> +	struct nvmet_subsys *subsys = to_subsys(item);
> +	unsigned int mdts;
> +	int ret;
> +
> +	ret = kstrtou32(page, 0, &mdts);
> +	if (ret)
> +		return ret;
> +	down_write(&nvmet_config_sem);
> +	subsys->mdts = mdts;

What is the convention, if any, for taking a possible 32-bit value and
putting it into an 8-bit variable? Should there be a warning if the
value is greater than the max allowed?
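
If silently truncating is the concern, one minimal variant (just a sketch,
not part of this patch) would be to let kstrtou8() do the range check,
since it returns -ERANGE for anything that does not fit into 8 bits:

static ssize_t nvmet_subsys_attr_mdts_store(struct config_item *item,
					    const char *page, size_t count)
{
	struct nvmet_subsys *subsys = to_subsys(item);
	u8 mdts;
	int ret;

	/* kstrtou8() rejects values above 255, so subsys->mdts can't truncate */
	ret = kstrtou8(page, 0, &mdts);
	if (ret)
		return ret;

	down_write(&nvmet_config_sem);
	subsys->mdts = mdts;
	up_write(&nvmet_config_sem);

	return count;
}

That still wouldn't catch values beyond what the transport can actually
move, only the 8-bit overflow.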

> +	up_write(&nvmet_config_sem);
> +
> +	return count;
> +}
> +CONFIGFS_ATTR(nvmet_subsys_, attr_mdts);
> +
>   static struct configfs_attribute *nvmet_subsys_attrs[] = {
>   	&nvmet_subsys_attr_attr_allow_any_host,
>   	&nvmet_subsys_attr_attr_version,
>   	&nvmet_subsys_attr_attr_serial,
> +	&nvmet_subsys_attr_attr_mdts,
>   	NULL,
>   };
>   
> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
> index 51e49efd7849..f05f069f83ea 100644
> --- a/drivers/nvme/target/nvmet.h
> +++ b/drivers/nvme/target/nvmet.h
> @@ -217,6 +217,8 @@ struct nvmet_subsys {
>   	bool			allow_any_host;
>   
>   	u16			max_qid;
> +	u8			mdts;
> +	u8			mpsmin;
>   
>   	u64			ver;
>   	u64			serial;
> 


* [PATCH 1/2] nvmet: make MDTS value configurable
  2019-03-29 12:04 ` [PATCH 1/2] nvmet: make MDTS value configurable Hannes Reinecke
  2019-03-29 17:27   ` Heitke, Kenneth
@ 2019-03-31 15:04   ` Max Gurtovoy
  2019-04-01  2:02     ` Hannes Reinecke
  1 sibling, 1 reply; 13+ messages in thread
From: Max Gurtovoy @ 2019-03-31 15:04 UTC (permalink / raw)



On 3/29/2019 3:04 PM, Hannes Reinecke wrote:
> For testing the bio splitting algorithm it's useful to be able to
> modify the mdts value.
>
> Signed-off-by: Hannes Reinecke <hare at suse.com>
> ---
>   drivers/nvme/target/admin-cmd.c |  3 +--
>   drivers/nvme/target/configfs.c  | 27 +++++++++++++++++++++++++++
>   drivers/nvme/target/nvmet.h     |  2 ++
>   3 files changed, 30 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
> index 76250181fee0..1c4cb371fac8 100644
> --- a/drivers/nvme/target/admin-cmd.c
> +++ b/drivers/nvme/target/admin-cmd.c
> @@ -311,8 +311,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
>   	/* we support multiple ports, multiples hosts and ANA: */
>   	id->cmic = (1 << 0) | (1 << 1) | (1 << 3);
>   
> -	/* no limit on data transfer sizes for now */
> -	id->mdts = 0;
> +	id->mdts = ctrl->subsys->mdts;

I think it's more of a ctrl/nvmet_port configuration than a subsystem one.

For example, if you have 2 HCAs, one is super strong and can transfer 
up to 8MB and the second is an old model that can transfer up to 64KiB.

do

id->mdts = ctrl->ops->port_mdts(req->port);

and inside the implementation you can do

min_not_zero(max_port_hca_cap_in_mdts_units, mdts_configfs_val);

and it will fix the comment that is written above (we're not really 
unlimited).

this way we're using the low-level device characteristics and also covering 
the bio split test.
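
For the RDMA case that op could look roughly like this (untested sketch,
just to illustrate; nvmet_rdma_max_mdts() is a made-up placeholder for the
HCA capability lookup, and port->mdts for a per-port configfs value):

static u8 nvmet_rdma_port_mdts(struct nvmet_port *port)
{
	/* hypothetical: HCA max transfer size expressed in MDTS units */
	u8 hca_mdts = nvmet_rdma_max_mdts(port);

	/* clamp the configured value; 0 means "no user limit" */
	return min_not_zero(hca_mdts, port->mdts);
}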

Thoughts?


>   	id->cntlid = cpu_to_le16(ctrl->cntlid);
>   	id->ver = cpu_to_le32(ctrl->subsys->ver);
>   
> diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
> index adb79545cdd7..63b9991de79e 100644
> --- a/drivers/nvme/target/configfs.c
> +++ b/drivers/nvme/target/configfs.c
> @@ -859,10 +859,37 @@ static ssize_t nvmet_subsys_attr_serial_store(struct config_item *item,
>   }
>   CONFIGFS_ATTR(nvmet_subsys_, attr_serial);
>   
> +static ssize_t nvmet_subsys_attr_mdts_show(struct config_item *item,
> +					   char *page)
> +{
> +	struct nvmet_subsys *subsys = to_subsys(item);
> +
> +	return snprintf(page, PAGE_SIZE, "%u\n", subsys->mdts);
> +}
> +
> +static ssize_t nvmet_subsys_attr_mdts_store(struct config_item *item,
> +					    const char *page, size_t count)
> +{
> +	struct nvmet_subsys *subsys = to_subsys(item);
> +	unsigned int mdts;
> +	int ret;
> +
> +	ret = kstrtou32(page, 0, &mdts);
> +	if (ret)
> +		return ret;
> +	down_write(&nvmet_config_sem);
> +	subsys->mdts = mdts;
> +	up_write(&nvmet_config_sem);
> +
> +	return count;
> +}
> +CONFIGFS_ATTR(nvmet_subsys_, attr_mdts);
> +
>   static struct configfs_attribute *nvmet_subsys_attrs[] = {
>   	&nvmet_subsys_attr_attr_allow_any_host,
>   	&nvmet_subsys_attr_attr_version,
>   	&nvmet_subsys_attr_attr_serial,
> +	&nvmet_subsys_attr_attr_mdts,
>   	NULL,
>   };
>   
> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
> index 51e49efd7849..f05f069f83ea 100644
> --- a/drivers/nvme/target/nvmet.h
> +++ b/drivers/nvme/target/nvmet.h
> @@ -217,6 +217,8 @@ struct nvmet_subsys {
>   	bool			allow_any_host;
>   
>   	u16			max_qid;
> +	u8			mdts;
> +	u8			mpsmin;


are you using mpsmin ?


>   
>   	u64			ver;
>   	u64			serial;


* [PATCH 1/2] nvmet: make MDTS value configurable
  2019-03-31 15:04   ` Max Gurtovoy
@ 2019-04-01  2:02     ` Hannes Reinecke
  2019-04-01  3:10       ` Keith Busch
  0 siblings, 1 reply; 13+ messages in thread
From: Hannes Reinecke @ 2019-04-01  2:02 UTC (permalink / raw)


On 3/31/19 5:04 PM, Max Gurtovoy wrote:
> 
> On 3/29/2019 3:04 PM, Hannes Reinecke wrote:
>> For testing the bio splitting algorithm it's useful to be able to
>> modify the mdts value.
>>
>> Signed-off-by: Hannes Reinecke <hare at suse.com>
>> ---
>>   drivers/nvme/target/admin-cmd.c |  3 +--
>>   drivers/nvme/target/configfs.c  | 27 +++++++++++++++++++++++++++
>>   drivers/nvme/target/nvmet.h     |  2 ++
>>   3 files changed, 30 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/nvme/target/admin-cmd.c 
>> b/drivers/nvme/target/admin-cmd.c
>> index 76250181fee0..1c4cb371fac8 100644
>> --- a/drivers/nvme/target/admin-cmd.c
>> +++ b/drivers/nvme/target/admin-cmd.c
>> @@ -311,8 +311,7 @@ static void nvmet_execute_identify_ctrl(struct 
>> nvmet_req *req)
>>      /* we support multiple ports, multiples hosts and ANA: */
>>      id->cmic = (1 << 0) | (1 << 1) | (1 << 3);
>> -    /* no limit on data transfer sizes for now */
>> -    id->mdts = 0;
>> +    id->mdts = ctrl->subsys->mdts;
> 
> I think it's more of a ctrl/nvmet_port configuration than a subsystem.
> 
> For example if you have 2 HCAs, one is super strong and can transfer 
> upto 8MB and the second is old model and can transfer upto 64KiB.
> 
> do
> 
> id->mdts = ctrl->ops->port_mdts(req->port);
> 
> and inside the implementation you can do
> 
> min_not_zero(max_port_hca_cap_in_mdts_units, mdts_configfs_val);
> 
> and it will fix the comment that is written above (we're not really 
> unlimited).
> 
> this way we're using the low level device characteristics and also cover 
> the bio split test.
> 
> thoughts ?
> 
> 
Yeah, true, it should be a port configuration.

I'll be updating the patch.

Cheers,

Hannes


* [PATCH 1/2] nvmet: make MDTS value configurable
  2019-04-01  2:02     ` Hannes Reinecke
@ 2019-04-01  3:10       ` Keith Busch
  2019-04-01 11:48         ` Hannes Reinecke
  2019-04-03 14:53         ` Christoph Hellwig
  0 siblings, 2 replies; 13+ messages in thread
From: Keith Busch @ 2019-04-01  3:10 UTC (permalink / raw)


On Sun, Mar 31, 2019 at 07:02:40PM -0700, Hannes Reinecke wrote:
> > I think it's more of a ctrl/nvmet_port configuration than a subsystem.
> > 
> > For example if you have 2 HCAs, one is super strong and can transfer 
> > upto 8MB and the second is old model and can transfer upto 64KiB.
> > 
> > do
> > 
> > id->mdts = ctrl->ops->port_mdts(req->port);
> > 
> > and inside the implementation you can do
> > 
> > min_not_zero(max_port_hca_cap_in_mdts_units, mdts_configfs_val);
> > 
> > and it will fix the comment that is written above (we're not really 
> > unlimited).
> > 
> > this way we're using the low level device characteristics and also cover 
> > the bio split test.
> > 
> > thoughts ?
> > 
> > 
> Yeah, true, it should be a port configuration.
> 
> I'll be updating the patch.

Wouldn't this indicate MDTS should be driven from hardware constraints
rather than user-tunable knobs? I don't think we should have user
parameters if their only purpose is to test how host drivers react
to them. It's okay if they have a functional purpose, but there are so
many adjustable nvme parameters that this is a bit of a slippery slope if
we want to turn the nvme target driver into a host driver test vehicle.


* [PATCH 1/2] nvmet: make MDTS value configurable
  2019-04-01  3:10       ` Keith Busch
@ 2019-04-01 11:48         ` Hannes Reinecke
  2019-04-02 12:06           ` Max Gurtovoy
  2019-04-03 14:53         ` Christoph Hellwig
  1 sibling, 1 reply; 13+ messages in thread
From: Hannes Reinecke @ 2019-04-01 11:48 UTC (permalink / raw)


On 4/1/19 5:10 AM, Keith Busch wrote:
> On Sun, Mar 31, 2019 at 07:02:40PM -0700, Hannes Reinecke wrote:
>>> I think it's more of a ctrl/nvmet_port configuration than a subsystem.
>>>
>>> For example if you have 2 HCAs, one is super strong and can transfer
>>> upto 8MB and the second is old model and can transfer upto 64KiB.
>>>
>>> do
>>>
>>> id->mdts = ctrl->ops->port_mdts(req->port);
>>>
>>> and inside the implementation you can do
>>>
>>> min_not_zero(max_port_hca_cap_in_mdts_units, mdts_configfs_val);
>>>
>>> and it will fix the comment that is written above (we're not really
>>> unlimited).
>>>
>>> this way we're using the low level device characteristics and also cover
>>> the bio split test.
>>>
>>> thoughts ?
>>>
>>>
>> Yeah, true, it should be a port configuration.
>>
>> I'll be updating the patch.
> 
> Wouldn't this indicate MDTS should be driven from hardware constraints
> rather than user tunable knobs? I don't think we should have user
> parameters if their only purpose is to test how host drivers react
> to them. It's okay if they've a functional purpose, but there are so
> many adjustable nvme parameters, this is a bit of a slippery slope if we
> want to turn the nvme target driver into a host driver test vehicle.
> 
I would love to indicate hardware constraints to the nvme target; there's a 
similar problem I'm facing in that the hardware is using only a limited 
set of queues, but I cannot express this with the nvme target.
However, in the absence of such an interface, having a manual 
configuration allows us to at least _map_ onto the hardware capabilities.

But I don't really buy the 'too many parameters' objection.
What _is_ the point of the software target if we cannot tweak it to test 
out certain corner cases in the spec?

Cheers,

Hannes
-- 
r. Hannes Reinecke            Teamlead Storage & Networking
hare at suse.de                              +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)


* [PATCH 1/2] nvmet: make MDTS value configurable
  2019-04-01 11:48         ` Hannes Reinecke
@ 2019-04-02 12:06           ` Max Gurtovoy
  2019-04-02 23:31             ` Keith Busch
  0 siblings, 1 reply; 13+ messages in thread
From: Max Gurtovoy @ 2019-04-02 12:06 UTC (permalink / raw)



On 4/1/2019 2:48 PM, Hannes Reinecke wrote:
> On 4/1/19 5:10 AM, Keith Busch wrote:
>> On Sun, Mar 31, 2019 at 07:02:40PM -0700, Hannes Reinecke wrote:
>>>> I think it's more of a ctrl/nvmet_port configuration than a subsystem.
>>>>
>>>> For example if you have 2 HCAs, one is super strong and can transfer
>>>> upto 8MB and the second is old model and can transfer upto 64KiB.
>>>>
>>>> do
>>>>
>>>> id->mdts = ctrl->ops->port_mdts(req->port);
>>>>
>>>> and inside the implementation you can do
>>>>
>>>> min_not_zero(max_port_hca_cap_in_mdts_units, mdts_configfs_val);
>>>>
>>>> and it will fix the comment that is written above (we're not really
>>>> unlimited).
>>>>
>>>> this way we're using the low level device characteristics and also 
>>>> cover
>>>> the bio split test.
>>>>
>>>> thoughts ?
>>>>
>>>>
>>> Yeah, true, it should be a port configuration.
>>>
>>> I'll be updating the patch.
>>
>> Wouldn't this indicate MDTS should be driven from hardware constraints
>> rather than user tunable knobs? I don't think we should have user
>> parameters if their only purpose is to test how host drivers react
>> to them. It's okay if they've a functional purpose, but there are so
>> many adjustable nvme parameters, this is a bit of a slippery slope if we
>> want to turn the nvme target driver into a host driver test vehicle.
>>
> I would love to indicate hardware constraints to nvme target; there's 
> a similar problem I'm facing in that the hardware is using only a 
> limited set of queues, but I cannot express this with nvme target.
> However, in the absense of such an interface having a manual 
> configuration allows us to at least _map_ onto the hardware capabilities.

My suggestion does exactly that: it checks the HW constraint.

My suggestion also makes sure that a greedy user will get a clamp and an 
economical user can save resources at the target.

Let's say I have a very "weak" target server with a super HCA 
(ConnectX-5); in that case I would like to set mdts=4 (or so) and let 
the initiator block layer deal with 1MB IO splits.
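
Just to put numbers on it (assuming the common 4 KiB CAP.MPSMIN): the limit
is 2^MDTS * MPSMIN, so mdts=4 means 2^4 * 4 KiB = 64 KiB per command, and
the host block layer has to split a 1 MB I/O into 16 requests.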

Keith,

Although NVMe has many params, I guess we can make some of the most 
vital ones tunable.

I agree regarding code adaptation for test cases (we shouldn't change the 
driver in order to write tests), but in this case it's a real-world 
scenario.

>
> But I don't really buy the 'too many parameters' objection.

We need to take that into consideration as well and expose only the vital 
params (we don't want to end up with hundreds of configfs entries like 
iSCSI has).


> What _is_ the point of the software target if we cannot tweak it to 
> test out certain corner cases in the spec?
>
> Cheers,
>
> Hannes


-Max.


* [PATCH 1/2] nvmet: make MDTS value configurable
  2019-04-02 12:06           ` Max Gurtovoy
@ 2019-04-02 23:31             ` Keith Busch
  0 siblings, 0 replies; 13+ messages in thread
From: Keith Busch @ 2019-04-02 23:31 UTC (permalink / raw)


On Tue, Apr 02, 2019 at 03:06:02PM +0300, Max Gurtovoy wrote:
> On 4/1/2019 2:48 PM, Hannes Reinecke wrote:
> > On 4/1/19 5:10 AM, Keith Busch wrote:
> > > 
> > > Wouldn't this indicate MDTS should be driven from hardware constraints
> > > rather than user tunable knobs? I don't think we should have user
> > > parameters if their only purpose is to test how host drivers react
> > > to them. It's okay if they've a functional purpose, but there are so
> > > many adjustable nvme parameters, this is a bit of a slippery slope if we
> > > want to turn the nvme target driver into a host driver test vehicle.
> > > 
> > I would love to indicate hardware constraints to nvme target; there's a
> > similar problem I'm facing in that the hardware is using only a limited
> > set of queues, but I cannot express this with nvme target.
> > However, in the absense of such an interface having a manual
> > configuration allows us to at least _map_ onto the hardware
> > capabilities.
> 
> my suggestion exactly check the HW constraint.
> 
> My suggestion also makes sure that a greedy user will get a clamp and an
> economical user can save resources at the target.
> 
> Let's say I have a very "weak" target server with super HCA (ConnectX-5), so
> in that case I would like to set mdts=4 (or so) and let the initiator block
> layer to deal with 1MB IO splits.
> 
> Keith,
> 
> although NVMe has many params, I guess we can make some of the most vital
> ones to be tunable.
> 
> I agree regarding code adaptation for test-cases (we shouldn't change driver
> in order to write tests), but in this case it's a real world scenario.

Fair enough, I am all for config options that serve to help the nvme
target work better.


* [PATCH 1/2] nvmet: make MDTS value configurable
  2019-04-01  3:10       ` Keith Busch
  2019-04-01 11:48         ` Hannes Reinecke
@ 2019-04-03 14:53         ` Christoph Hellwig
  1 sibling, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2019-04-03 14:53 UTC (permalink / raw)


On Sun, Mar 31, 2019 at 09:10:37PM -0600, Keith Busch wrote:
> Wouldn't this indicate MDTS should be driven from hardware constraints
> rather than user tunable knobs? I don't think we should have user
> parameters if their only purpose is to test how host drivers react
> to them. It's okay if they've a functional purpose, but there are so
> many adjustable nvme parameters, this is a bit of a slippery slope if we
> want to turn the nvme target driver into a host driver test vehicle.

Agreed.  In the past we've not merged patches that just include
tunables for testing purposes, and I'm inclined to keep that stance.


* [PATCH 2/2] nvmet: make CNTLID range configurable
  2019-03-29 12:04 ` [PATCH 2/2] nvmet: make CNTLID range configurable Hannes Reinecke
@ 2019-04-03 18:11   ` Christoph Hellwig
  2019-04-03 22:13     ` Hannes Reinecke
  0 siblings, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2019-04-03 18:11 UTC (permalink / raw)


On Fri, Mar 29, 2019 at 01:04:04PM +0100, Hannes Reinecke wrote:
> When trying to run in a clustered environment we need to keep the
> controller ID values unique. So this patch add to additional config
> attributes 'cntlid_min' and 'cntlid_max' to only allow specific ranges
> for the controller ID.

Err, can you explain the use case a little better?  This sounds like
running two different nvmet instances and pretending they are a single
subsystem.  Which sounds pretty dangerous without additional
communication layers.


* [PATCH 2/2] nvmet: make CNTLID range configurable
  2019-04-03 18:11   ` Christoph Hellwig
@ 2019-04-03 22:13     ` Hannes Reinecke
  0 siblings, 0 replies; 13+ messages in thread
From: Hannes Reinecke @ 2019-04-03 22:13 UTC (permalink / raw)


On 4/3/19 8:11 PM, Christoph Hellwig wrote:
> On Fri, Mar 29, 2019 at 01:04:04PM +0100, Hannes Reinecke wrote:
>> When trying to run in a clustered environment we need to keep the
>> controller ID values unique. So this patch add to additional config
>> attributes 'cntlid_min' and 'cntlid_max' to only allow specific ranges
>> for the controller ID.
> 
> Err, can you explain the use case a little better?  This sounds like
> running two different nvmet instances and preting they are a single
> subsystem.  Which sounds pretty dangerous without additional
> communication layers.
> 
I'm running three qemu instances with RDMA HCAs via vfio.
Two instances are accessing the same backing device and exporting that 
via NVME-oF to the third; one exports the device as 'optimized' and the 
other as 'non-optimized'.
The idea is to power-cycle/reset/crash the first instance and test failover.
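
(With the new attributes that would be, purely as an example split,
cntlid_min=1/cntlid_max=31 on the first instance and
cntlid_min=32/cntlid_max=63 on the second, so the host never sees two
controllers with the same CNTLID.)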

Cheers,

Hannes

