* [PATCH v1 0/5] sqsize fixes
@ 2016-08-13  0:04 Jay Freyensee
  2016-08-13  0:04 ` [PATCH v1 1/5] fabrics: define admin sqsize min default, per spec Jay Freyensee
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Jay Freyensee @ 2016-08-13  0:04 UTC (permalink / raw)


This is the stable release of the sqsize fixes, based on making
sure sqsize is treated as a zero-based value throughout the code,
per the NVMe-over-Fabrics spec.

Patch 1 sets the admin sqsize to 32, the minimum required per the spec.
Patches 2-4 adjust the code base to use sqsize as a 0-based value.
Patch 5 adjusts hrqsize to hsqsize plus 1.

Thus, one can take their pick and decide whether patch 5 is required
based on the NVMe-over-Fabrics spec discussions... you can have your
cake and eat it too :-).
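
For reference, a minimal sketch of the 0-based convention the series
enforces (illustrative only; the names follow the patches below):

	/* user-visible queue_size is 1-based; on-wire sqsize is 0-based */
	ctrl->ctrl.sqsize = opts->queue_size - 1;	/* e.g. 128 -> 127 */

	/* convert back to a 1-based depth for internal sizing (tag sets) */
	ctrl->tag_set.queue_depth = ctrl->ctrl.sqsize + 1;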

Changes from v0:
 - found all the sqsize dependencies and adjusted them accordingly
 - nvmf_connect_admin_queue() always uses NVMF_AQ_DEPTH for sqsize
 - final patch adjusts hrqsize only, so the series can be easily
   tested with hrqsize == hsqsize (patches 1-4) and hrqsize == hsqsize+1
   (patch 5)

Jay Freyensee (5):
  fabrics: define admin sqsize min default, per spec
  nvme-rdma: fix sqsize/hsqsize per spec
  nvmet-rdma: +1 to *queue_size from hsqsize/hrqsize
  nvme-loop: set sqsize to 0-based value, per spec
  nvme-rdma: adjust hrqsize to plus 1

 drivers/nvme/host/fabrics.c |  9 ++++++++-
 drivers/nvme/host/rdma.c    | 19 ++++++++++++++-----
 drivers/nvme/target/loop.c  |  4 ++--
 drivers/nvme/target/rdma.c  |  9 +++++----
 4 files changed, 29 insertions(+), 12 deletions(-)

-- 
2.7.4


* [PATCH v1 1/5] fabrics: define admin sqsize min default, per spec
  2016-08-13  0:04 [PATCH v1 0/5] sqsize fixes Jay Freyensee
@ 2016-08-13  0:04 ` Jay Freyensee
  2016-08-15 15:55   ` Verkamp, Daniel
  2016-08-13  0:04 ` [PATCH v1 2/5] nvme-rdma: fix sqsize/hsqsize " Jay Freyensee
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Jay Freyensee @ 2016-08-13  0:04 UTC (permalink / raw)


Upon admin queue connect(), the rdma qp was being
set based on NVMF_AQ_DEPTH.  However, the fabrics layer was
using the sqsize field value set for I/O queues for the admin
queue, which through the nvme layer and rdma layer off-whack:

[root at fedora23-fabrics-host1 nvmf]# dmesg
[ 3507.798642] nvme_fabrics: nvmf_connect_admin_queue():admin sqsize
being sent is: 128
[ 3507.798858] nvme nvme0: creating 16 I/O queues.
[ 3507.896407] nvme nvme0: new ctrl: NQN "nullside-nqn", addr
192.168.1.3:4420

Thus, to give the admin queue its own depth, use NVMF_AQ_DEPTH for
both connect() and the RDMA private data, the minimum depth
specified in the NVMe-over-Fabrics 1.0 spec.
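
With NVMF_AQ_DEPTH == 32, the admin connect command now carries, per
the fabrics.c hunk below:

	sqsize = NVMF_AQ_DEPTH - 1 = 31	/* 0-based, spec minimum depth 32 */

instead of the I/O-queue value of 128 seen in the dmesg above.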

Reported-by: Daniel Verkamp <daniel.verkamp at intel.com>
Signed-off-by: Jay Freyensee <james_p_freyensee at linux.intel.com>
---
 drivers/nvme/host/fabrics.c |  9 ++++++++-
 drivers/nvme/host/rdma.c    | 13 +++++++++++--
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index dc99676..020302c 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -363,7 +363,14 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
 	cmd.connect.opcode = nvme_fabrics_command;
 	cmd.connect.fctype = nvme_fabrics_type_connect;
 	cmd.connect.qid = 0;
-	cmd.connect.sqsize = cpu_to_le16(ctrl->sqsize);
+
+	/*
+	 * The fabrics spec sets a minimum depth of 32 for the admin
+	 * queue, so always use this depth until there is
+	 * justification otherwise.
+	 */
+	cmd.connect.sqsize = cpu_to_le16(NVMF_AQ_DEPTH - 1);
+
 	/*
 	 * Set keep-alive timeout in seconds granularity (ms * 1000)
 	 * and add a grace period for controller kato enforcement
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 3e3ce2b..168cd23 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1284,8 +1284,17 @@ static int nvme_rdma_route_resolved(struct nvme_rdma_queue *queue)
 
 	priv.recfmt = cpu_to_le16(NVME_RDMA_CM_FMT_1_0);
 	priv.qid = cpu_to_le16(nvme_rdma_queue_idx(queue));
-	priv.hrqsize = cpu_to_le16(queue->queue_size);
-	priv.hsqsize = cpu_to_le16(queue->queue_size);
+	/*
+	 * set the admin queue depth to the minimum size
+	 * specified by the Fabrics standard.
+	 */
+	if (priv.qid == 0) {
+		priv.hrqsize = cpu_to_le16(NVMF_AQ_DEPTH - 1);
+		priv.hsqsize = cpu_to_le16(NVMF_AQ_DEPTH - 1);
+	} else {
+		priv.hrqsize = cpu_to_le16(queue->queue_size);
+		priv.hsqsize = cpu_to_le16(queue->queue_size);
+	}
 
 	ret = rdma_connect(queue->cm_id, &param);
 	if (ret) {
-- 
2.7.4


* [PATCH v1 2/5] nvme-rdma: fix sqsize/hsqsize per spec
  2016-08-13  0:04 [PATCH v1 0/5] sqsize fixes Jay Freyensee
  2016-08-13  0:04 ` [PATCH v1 1/5] fabrics: define admin sqsize min default, per spec Jay Freyensee
@ 2016-08-13  0:04 ` Jay Freyensee
  2016-08-13  0:04 ` [PATCH v1 3/5] nvmet-rdma: +1 to *queue_size from hsqsize/hrqsize Jay Freyensee
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Jay Freyensee @ 2016-08-13  0:04 UTC (permalink / raw)


Per NVMe-over-Fabrics 1.0 spec, sqsize is represented as
a 0-based value.

Also per the spec, the RDMA binding values shall be set
to sqsize, which makes hsqsize a 0-based value as well.

Thus, the sqsize during NVMf connect() is now:

[root at fedora23-fabrics-host1 for-48]# dmesg
[  318.720645] nvme_fabrics: nvmf_connect_admin_queue(): sqsize for
admin queue: 31
[  318.720884] nvme nvme0: creating 16 I/O queues.
[  318.810114] nvme_fabrics: nvmf_connect_io_queue(): sqsize for i/o
queue: 127
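
These values follow directly from the 0-based conversion (assuming the
default queue_size of 128 in this setup):

	admin sqsize = NVMF_AQ_DEPTH - 1 = 32 - 1  = 31
	i/o sqsize   = queue_size - 1    = 128 - 1 = 127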

Reported-by: Daniel Verkamp <daniel.verkamp at intel.com>
Signed-off-by: Jay Freyensee <james_p_freyensee at linux.intel.com>
---
 drivers/nvme/host/rdma.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 168cd23..6aa913e 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -649,7 +649,7 @@ static int nvme_rdma_init_io_queues(struct nvme_rdma_ctrl *ctrl)
 	int i, ret;
 
 	for (i = 1; i < ctrl->queue_count; i++) {
-		ret = nvme_rdma_init_queue(ctrl, i, ctrl->ctrl.sqsize);
+		ret = nvme_rdma_init_queue(ctrl, i, ctrl->ctrl.sqsize + 1);
 		if (ret) {
 			dev_info(ctrl->ctrl.device,
 				"failed to initialize i/o queue: %d\n", ret);
@@ -1292,8 +1292,8 @@ static int nvme_rdma_route_resolved(struct nvme_rdma_queue *queue)
 		priv.hrqsize = cpu_to_le16(NVMF_AQ_DEPTH - 1);
 		priv.hsqsize = cpu_to_le16(NVMF_AQ_DEPTH - 1);
 	} else {
-		priv.hrqsize = cpu_to_le16(queue->queue_size);
-		priv.hsqsize = cpu_to_le16(queue->queue_size);
+		priv.hrqsize = cpu_to_le16(queue->ctrl->ctrl.sqsize);
+		priv.hsqsize = cpu_to_le16(queue->ctrl->ctrl.sqsize);
 	}
 
 	ret = rdma_connect(queue->cm_id, &param);
@@ -1818,7 +1818,7 @@ static int nvme_rdma_create_io_queues(struct nvme_rdma_ctrl *ctrl)
 
 	memset(&ctrl->tag_set, 0, sizeof(ctrl->tag_set));
 	ctrl->tag_set.ops = &nvme_rdma_mq_ops;
-	ctrl->tag_set.queue_depth = ctrl->ctrl.sqsize;
+	ctrl->tag_set.queue_depth = ctrl->ctrl.sqsize + 1;
 	ctrl->tag_set.reserved_tags = 1; /* fabric connect */
 	ctrl->tag_set.numa_node = NUMA_NO_NODE;
 	ctrl->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
@@ -1916,7 +1916,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 	spin_lock_init(&ctrl->lock);
 
 	ctrl->queue_count = opts->nr_io_queues + 1; /* +1 for admin queue */
-	ctrl->ctrl.sqsize = opts->queue_size;
+	ctrl->ctrl.sqsize = opts->queue_size - 1;
 	ctrl->ctrl.kato = opts->kato;
 
 	ret = -ENOMEM;
-- 
2.7.4


* [PATCH v1 3/5] nvmet-rdma: +1 to *queue_size from hsqsize/hrqsize
  2016-08-13  0:04 [PATCH v1 0/5] sqsize fixes Jay Freyensee
  2016-08-13  0:04 ` [PATCH v1 1/5] fabrics: define admin sqsize min default, per spec Jay Freyensee
  2016-08-13  0:04 ` [PATCH v1 2/5] nvme-rdma: fix sqsize/hsqsize " Jay Freyensee
@ 2016-08-13  0:04 ` Jay Freyensee
  2016-08-15 15:54   ` Verkamp, Daniel
  2016-08-13  0:04 ` [PATCH v1 4/5] nvme-loop: set sqsize to 0-based value, per spec Jay Freyensee
  2016-08-13  0:04 ` [PATCH v1 5/5] nvme-rdma: adjust hrqsize to plus 1 Jay Freyensee
  4 siblings, 1 reply; 9+ messages in thread
From: Jay Freyensee @ 2016-08-13  0:04 UTC (permalink / raw)


Now that the host will be sending 0-based sqsize values,
the target needs to be adjusted as well.
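
That is, the target recovers its 1-based internal sizes from the
0-based values on the wire:

	recv_queue_size = hsqsize + 1
	send_queue_size = hrqsize + 1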

Signed-off-by: Jay Freyensee <james_p_freyensee at linux.intel.com>
---
 drivers/nvme/target/rdma.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index e06d504..d7cb98d 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1004,11 +1004,11 @@ nvmet_rdma_parse_cm_connect_req(struct rdma_conn_param *conn,
 	queue->host_qid = le16_to_cpu(req->qid);
 
 	/*
-	 * req->hsqsize corresponds to our recv queue size
-	 * req->hrqsize corresponds to our send queue size
+	 * req->hsqsize corresponds to our recv queue size plus 1
+	 * req->hrqsize corresponds to our send queue size plus 1
 	 */
-	queue->recv_queue_size = le16_to_cpu(req->hsqsize);
-	queue->send_queue_size = le16_to_cpu(req->hrqsize);
+	queue->recv_queue_size = le16_to_cpu(req->hsqsize + 1);
+	queue->send_queue_size = le16_to_cpu(req->hrqsize + 1);
 
 	if (!queue->host_qid && queue->recv_queue_size > NVMF_AQ_DEPTH)
 		return NVME_RDMA_CM_INVALID_HSQSIZE;
-- 
2.7.4


* [PATCH v1 4/5] nvme-loop: set sqsize to 0-based value, per spec
  2016-08-13  0:04 [PATCH v1 0/5] sqsize fixes Jay Freyensee
                   ` (2 preceding siblings ...)
  2016-08-13  0:04 ` [PATCH v1 3/5] nvmet-rdma: +1 to *queue_size from hsqsize/hrqsize Jay Freyensee
@ 2016-08-13  0:04 ` Jay Freyensee
  2016-08-13  0:04 ` [PATCH v1 5/5] nvme-rdma: adjust hrqsize to plus 1 Jay Freyensee
  4 siblings, 0 replies; 9+ messages in thread
From: Jay Freyensee @ 2016-08-13  0:04 UTC (permalink / raw)


Signed-off-by: Jay Freyensee <james_p_freyensee at linux.intel.com>
---
 drivers/nvme/target/loop.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 94e7829..ce22d4b0 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -558,7 +558,7 @@ static int nvme_loop_create_io_queues(struct nvme_loop_ctrl *ctrl)
 
 	memset(&ctrl->tag_set, 0, sizeof(ctrl->tag_set));
 	ctrl->tag_set.ops = &nvme_loop_mq_ops;
-	ctrl->tag_set.queue_depth = ctrl->ctrl.sqsize;
+	ctrl->tag_set.queue_depth = ctrl->ctrl.sqsize + 1;
 	ctrl->tag_set.reserved_tags = 1; /* fabric connect */
 	ctrl->tag_set.numa_node = NUMA_NO_NODE;
 	ctrl->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
@@ -622,7 +622,7 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
 
 	ret = -ENOMEM;
 
-	ctrl->ctrl.sqsize = opts->queue_size;
+	ctrl->ctrl.sqsize = opts->queue_size - 1;
 	ctrl->ctrl.kato = opts->kato;
 
 	ctrl->queues = kcalloc(opts->nr_io_queues + 1, sizeof(*ctrl->queues),
-- 
2.7.4


* [PATCH v1 5/5] nvme-rdma: adjust hrqsize to plus 1
  2016-08-13  0:04 [PATCH v1 0/5] sqsize fixes Jay Freyensee
                   ` (3 preceding siblings ...)
  2016-08-13  0:04 ` [PATCH v1 4/5] nvme-loop: set sqsize to 0-based value, per spec Jay Freyensee
@ 2016-08-13  0:04 ` Jay Freyensee
  4 siblings, 0 replies; 9+ messages in thread
From: Jay Freyensee @ 2016-08-13  0:04 UTC (permalink / raw)


The correct value is currently under debate due to spec confusion,
so increase hrqsize to one more than hsqsize.
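
With this change, the values carried in the RDMA private data become:

	admin: hsqsize = NVMF_AQ_DEPTH - 1 = 31, hrqsize = NVMF_AQ_DEPTH = 32
	i/o:   hsqsize = sqsize,            hrqsize = sqsize + 1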

Signed-off-by: Jay Freyensee <james_p_freyensee at linux.intel.com>
---
 drivers/nvme/host/rdma.c   | 4 ++--
 drivers/nvme/target/rdma.c | 7 ++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 6aa913e..524c2b3 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1289,10 +1289,10 @@ static int nvme_rdma_route_resolved(struct nvme_rdma_queue *queue)
 	 * specified by the Fabrics standard.
 	 */
 	if (priv.qid == 0) {
-		priv.hrqsize = cpu_to_le16(NVMF_AQ_DEPTH - 1);
+		priv.hrqsize = cpu_to_le16(NVMF_AQ_DEPTH);
 		priv.hsqsize = cpu_to_le16(NVMF_AQ_DEPTH - 1);
 	} else {
-		priv.hrqsize = cpu_to_le16(queue->ctrl->ctrl.sqsize);
+		priv.hrqsize = cpu_to_le16(queue->ctrl->ctrl.sqsize + 1);
 		priv.hsqsize = cpu_to_le16(queue->ctrl->ctrl.sqsize);
 	}
 
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index d7cb98d..60023d5 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1004,11 +1004,12 @@ nvmet_rdma_parse_cm_connect_req(struct rdma_conn_param *conn,
 	queue->host_qid = le16_to_cpu(req->qid);
 
 	/*
-	 * req->hsqsize corresponds to our recv queue size plus 1
-	 * req->hrqsize corresponds to our send queue size plus 1
+	 * req->hsqsize corresponds to our recv queue size plus 1 as
+	 * this value is based on sqsize, a 0-based value.
+	 * req->hrqsize corresponds to our send queue size plus 2
 	 */
 	queue->recv_queue_size = le16_to_cpu(req->hsqsize + 1);
-	queue->send_queue_size = le16_to_cpu(req->hrqsize + 1);
+	queue->send_queue_size = le16_to_cpu(req->hrqsize + 2);
 
 	if (!queue->host_qid && queue->recv_queue_size > NVMF_AQ_DEPTH)
 		return NVME_RDMA_CM_INVALID_HSQSIZE;
-- 
2.7.4


* [PATCH v1 3/5] nvmet-rdma: +1 to *queue_size from hsqsize/hrqsize
  2016-08-13  0:04 ` [PATCH v1 3/5] nvmet-rdma: +1 to *queue_size from hsqsize/hrqsize Jay Freyensee
@ 2016-08-15 15:54   ` Verkamp, Daniel
  2016-08-15 16:15     ` J Freyensee
  0 siblings, 1 reply; 9+ messages in thread
From: Verkamp, Daniel @ 2016-08-15 15:54 UTC (permalink / raw)


On Fri, 2016-08-12 at 17:04 -0700, Jay Freyensee wrote:
> Now that the host will be sending 0-based sqsize values,
> the target needs to be adjusted as well.
> 
> Signed-off-by: Jay Freyensee <james_p_freyensee at linux.intel.com>
> ---
>  drivers/nvme/target/rdma.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index e06d504..d7cb98d 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -1004,11 +1004,11 @@ nvmet_rdma_parse_cm_connect_req(struct
> rdma_conn_param *conn,
>  	queue->host_qid = le16_to_cpu(req->qid);
>
>  	/*
> -	 * req->hsqsize corresponds to our recv queue size
> -	 * req->hrqsize corresponds to our send queue size
> +	 * req->hsqsize corresponds to our recv queue size plus 1
> +	 * req->hrqsize corresponds to our send queue size plus 1
>  	 */
> -	queue->recv_queue_size = le16_to_cpu(req->hsqsize);
> -	queue->send_queue_size = le16_to_cpu(req->hrqsize);
> +	queue->recv_queue_size = le16_to_cpu(req->hsqsize + 1);
> +	queue->send_queue_size = le16_to_cpu(req->hrqsize + 1);

I think the + 1 needs to be outside le16_to_cpu() here so the math is
done in the native CPU byte order.
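
e.g., something like:

	queue->recv_queue_size = le16_to_cpu(req->hsqsize) + 1;
	queue->send_queue_size = le16_to_cpu(req->hrqsize) + 1;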

-- Daniel


* [PATCH v1 1/5] fabrics: define admin sqsize min default, per spec
  2016-08-13  0:04 ` [PATCH v1 1/5] fabrics: define admin sqsize min default, per spec Jay Freyensee
@ 2016-08-15 15:55   ` Verkamp, Daniel
  0 siblings, 0 replies; 9+ messages in thread
From: Verkamp, Daniel @ 2016-08-15 15:55 UTC (permalink / raw)


On Fri, 2016-08-12 at 17:04 -0700, Jay Freyensee wrote:
> Upon admin queue connect(), the rdma qp was being
> set based on NVMF_AQ_DEPTH.  However, the fabrics layer was
> using the sqsize field value set for I/O queues for the admin
> queue, which through the nvme layer and rdma layer off-whack:

Minor grammar nit: "through" -> "threw".

Otherwise looks OK to me (given that the following patches, which fix
up the non-AQ case, are also applied).

Thanks,
-- Daniel


* [PATCH v1 3/5] nvmet-rdma: +1 to *queue_size from hsqsize/hrqsize
  2016-08-15 15:54   ` Verkamp, Daniel
@ 2016-08-15 16:15     ` J Freyensee
  0 siblings, 0 replies; 9+ messages in thread
From: J Freyensee @ 2016-08-15 16:15 UTC (permalink / raw)


On Mon, 2016-08-15 at 15:54 +0000, Verkamp, Daniel wrote:
> On Fri, 2016-08-12 at 17:04 -0700, Jay Freyensee wrote:
> > 
> > Now that the host will be sending 0-based sqsize values,
> > the target needs to be adjusted as well.
> > 
> > Signed-off-by: Jay Freyensee <james_p_freyensee at linux.intel.com>
> > ---
> >  drivers/nvme/target/rdma.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/nvme/target/rdma.c
> > b/drivers/nvme/target/rdma.c
> > index e06d504..d7cb98d 100644
> > --- a/drivers/nvme/target/rdma.c
> > +++ b/drivers/nvme/target/rdma.c
> > @@ -1004,11 +1004,11 @@ nvmet_rdma_parse_cm_connect_req(struct
> > rdma_conn_param *conn,
> >  	queue->host_qid = le16_to_cpu(req->qid);
> >
> >  	/*
> > -	 * req->hsqsize corresponds to our recv queue size
> > -	 * req->hrqsize corresponds to our send queue size
> > +	 * req->hsqsize corresponds to our recv queue size plus 1
> > +	 * req->hrqsize corresponds to our send queue size plus 1
> >  	 */
> > -	queue->recv_queue_size = le16_to_cpu(req->hsqsize);
> > -	queue->send_queue_size = le16_to_cpu(req->hrqsize);
> > +	queue->recv_queue_size = le16_to_cpu(req->hsqsize + 1);
> > +	queue->send_queue_size = le16_to_cpu(req->hrqsize + 1);
> 
> I think the + 1 needs to be outside le16_to_cpu() here so the math is
> done in the native CPU byte order.

Yah, I agree: hrqsize/hsqsize is coming off the wire and then we are
setting it to an internal variable.  Fix coming shortly.

> 
> -- Daniel

