* [PATCH V2 1/3] nvmet: Add get_mdts op for controllers
@ 2020-03-05  9:55 Max Gurtovoy
  2020-03-05  9:55 ` [PATCH V2 2/3] nvmet-rdma: Implement get_mdts controller op Max Gurtovoy
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Max Gurtovoy @ 2020-03-05  9:55 UTC (permalink / raw)
  To: jgg, linux-nvme, sagi, hch, kbusch, Chaitanya.Kulkarni
  Cc: krishna2, Max Gurtovoy, bharat, nirranjan, bvanassche

Some transports, such as RDMA, would like to set the Maximum Data
Transfer Size (MDTS) according to device/port/ctrl characteristics.
This enables the transport to set the optimal MDTS according to
controller needs and device capabilities. Add a new nvmet transport
op that is called during ctrl identification. This does not affect
transports that don't implement the op. The return value of the new
op follows the NVMe spec definition of MDTS.
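
For context, the NVMe spec defines MDTS as a power of two in units of
the controller's minimum memory page size (CAP.MPSMIN); a value of 0
means no limit. A minimal userspace sketch of the conversion, assuming
4KB pages (the helper and its name are illustrative, not part of this
patch):

#include <stdio.h>

/* Convert an MDTS value to a byte limit, assuming CAP.MPSMIN
 * indicates 4KB pages. Per the spec, mdts == 0 means no limit. */
static unsigned long mdts_to_bytes(unsigned char mdts)
{
	const unsigned int mps_shift = 12;	/* 4KB minimum page size */

	if (mdts == 0)
		return 0;	/* unlimited */
	return 1UL << (mps_shift + mdts);
}

int main(void)
{
	printf("mdts=8 -> %lu bytes\n", mdts_to_bytes(8));	/* 1048576 (1MB) */
	return 0;
}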

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
---

changes from V1:
 - renamed set_mdts to get_mdts (Bart)
 - added const for ctrl argument (Bart)
 - updated commit message to explain return value of the new op (Bart)

---
 drivers/nvme/target/admin-cmd.c | 8 ++++++--
 drivers/nvme/target/nvmet.h     | 1 +
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index c0aa9c3..b9ec489 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -369,8 +369,12 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 	/* we support multiple ports, multiples hosts and ANA: */
 	id->cmic = (1 << 0) | (1 << 1) | (1 << 3);
 
-	/* no limit on data transfer sizes for now */
-	id->mdts = 0;
+	/* Limit MDTS according to transport capability */
+	if (ctrl->ops->get_mdts)
+		id->mdts = ctrl->ops->get_mdts(ctrl);
+	else
+		id->mdts = 0;
+
 	id->cntlid = cpu_to_le16(ctrl->cntlid);
 	id->ver = cpu_to_le32(ctrl->subsys->ver);
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 42ba2dd..421dff3 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -289,6 +289,7 @@ struct nvmet_fabrics_ops {
 			struct nvmet_port *port, char *traddr);
 	u16 (*install_queue)(struct nvmet_sq *nvme_sq);
 	void (*discovery_chg)(struct nvmet_port *port);
+	u8 (*get_mdts)(const struct nvmet_ctrl *ctrl);
 };
 
 #define NVMET_MAX_INLINE_BIOVEC	8
-- 
1.8.3.1



* [PATCH V2 2/3] nvmet-rdma: Implement get_mdts controller op
  2020-03-05  9:55 [PATCH V2 1/3] nvmet: Add get_mdts op for controllers Max Gurtovoy
@ 2020-03-05  9:55 ` Max Gurtovoy
  2020-03-05 20:44   ` Sagi Grimberg
  2020-03-05  9:55 ` [PATCH V2 3/3] nvmet-rdma: allocate RW ctxs according to mdts Max Gurtovoy
  2020-03-05 20:40 ` [PATCH V2 1/3] nvmet: Add get_mdts op for controllers Sagi Grimberg
  2 siblings, 1 reply; 9+ messages in thread
From: Max Gurtovoy @ 2020-03-05  9:55 UTC (permalink / raw)
  To: jgg, linux-nvme, sagi, hch, kbusch, Chaitanya.Kulkarni
  Cc: krishna2, Max Gurtovoy, bharat, nirranjan, bvanassche

Set the maximum data transfer size to 1MB (currently mdts is
unlimited). This allows calculating the number of MRs that
one ctrl should allocate to fulfill its capabilities.
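
As a sanity check on the 1MB figure, assuming the 4KB minimum page
size noted in the patch:

  max transfer = 2^mdts pages * 4KB/page = 2^8 * 4KB = 256 * 4KB = 1MB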

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---

changes from V1:
 - renamed nvmet_rdma_set_mdts to nvmet_rdma_get_mdts
 - align to get_mdts callback changes

---
 drivers/nvme/target/rdma.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 37d262a..d12ef0d 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -30,6 +30,7 @@
 #define NVMET_RDMA_DEFAULT_INLINE_DATA_SIZE	PAGE_SIZE
 #define NVMET_RDMA_MAX_INLINE_SGE		4
 #define NVMET_RDMA_MAX_INLINE_DATA_SIZE		max_t(int, SZ_16K, PAGE_SIZE)
+#define NVMET_RDMA_MAX_MDTS			8
 
 struct nvmet_rdma_cmd {
 	struct ib_sge		sge[NVMET_RDMA_MAX_INLINE_SGE + 1];
@@ -1602,6 +1603,12 @@ static void nvmet_rdma_disc_port_addr(struct nvmet_req *req,
 	}
 }
 
+static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
+{
+	/* Assume mpsmin == device_page_size == 4KB */
+	return NVMET_RDMA_MAX_MDTS;
+}
+
 static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
 	.owner			= THIS_MODULE,
 	.type			= NVMF_TRTYPE_RDMA,
@@ -1612,6 +1619,7 @@ static void nvmet_rdma_disc_port_addr(struct nvmet_req *req,
 	.queue_response		= nvmet_rdma_queue_response,
 	.delete_ctrl		= nvmet_rdma_delete_ctrl,
 	.disc_traddr		= nvmet_rdma_disc_port_addr,
+	.get_mdts		= nvmet_rdma_get_mdts,
 };
 
 static void nvmet_rdma_remove_one(struct ib_device *ib_device, void *client_data)
-- 
1.8.3.1



* [PATCH V2 3/3] nvmet-rdma: allocate RW ctxs according to mdts
  2020-03-05  9:55 [PATCH V2 1/3] nvmet: Add get_mdts op for controllers Max Gurtovoy
  2020-03-05  9:55 ` [PATCH V2 2/3] nvmet-rdma: Implement get_mdts controller op Max Gurtovoy
@ 2020-03-05  9:55 ` Max Gurtovoy
  2020-03-05 20:49   ` Sagi Grimberg
  2020-03-05 20:40 ` [PATCH V2 1/3] nvmet: Add get_mdts op for controllers Sagi Grimberg
  2 siblings, 1 reply; 9+ messages in thread
From: Max Gurtovoy @ 2020-03-05  9:55 UTC (permalink / raw)
  To: jgg, linux-nvme, sagi, hch, kbusch, Chaitanya.Kulkarni
  Cc: krishna2, Max Gurtovoy, bharat, nirranjan, bvanassche

The current nvmet-rdma code allocates the MR pool budget based on queue
size, assuming both host and target use the same "max_pages_per_mr"
count. After limiting the mdts value for RDMA controllers, we know the
maximum number of MRs per IO operation. Thus, make sure the MR pool is
sufficient for the required IO depth and IO size.

That is, if the host's SQ size is 100, then the MR pool budget
currently allocated at the target will also be 100 MRs. But 100 IO
WRITE requests with an sg_count of 256 (a 1MB IO size) require 200 MRs
when the target's "max_pages_per_mr" is 128.

Reported-by: Krishnamraju Eraparaju <krishna2@chelsio.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---

changes from V1:
 - added "Reviewed-by" signature (Christoph)

---
 drivers/nvme/target/rdma.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index d12ef0d..daab656 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -976,7 +976,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 {
 	struct ib_qp_init_attr qp_attr;
 	struct nvmet_rdma_device *ndev = queue->dev;
-	int comp_vector, nr_cqe, ret, i;
+	int comp_vector, nr_cqe, ret, i, factor;
 
 	/*
 	 * Spread the io queues across completion vectors,
@@ -1009,7 +1009,9 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 	qp_attr.qp_type = IB_QPT_RC;
 	/* +1 for drain */
 	qp_attr.cap.max_send_wr = queue->send_queue_size + 1;
-	qp_attr.cap.max_rdma_ctxs = queue->send_queue_size;
+	factor = rdma_rw_mr_factor(ndev->device, queue->cm_id->port_num,
+				   1 << NVMET_RDMA_MAX_MDTS);
+	qp_attr.cap.max_rdma_ctxs = queue->send_queue_size * factor;
 	qp_attr.cap.max_send_sge = max(ndev->device->attrs.max_sge_rd,
 					ndev->device->attrs.max_send_sge);
 
-- 
1.8.3.1



* Re: [PATCH V2 1/3] nvmet: Add get_mdts op for controllers
  2020-03-05  9:55 [PATCH V2 1/3] nvmet: Add get_mdts op for controllers Max Gurtovoy
  2020-03-05  9:55 ` [PATCH V2 2/3] nvmet-rdma: Implement get_mdts controller op Max Gurtovoy
  2020-03-05  9:55 ` [PATCH V2 3/3] nvmet-rdma: allocate RW ctxs according to mdts Max Gurtovoy
@ 2020-03-05 20:40 ` Sagi Grimberg
  2 siblings, 0 replies; 9+ messages in thread
From: Sagi Grimberg @ 2020-03-05 20:40 UTC (permalink / raw)
  To: Max Gurtovoy, jgg, linux-nvme, hch, kbusch, Chaitanya.Kulkarni
  Cc: krishna2, bharat, nirranjan, bvanassche

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH V2 2/3] nvmet-rdma: Implement get_mdts controller op
  2020-03-05  9:55 ` [PATCH V2 2/3] nvmet-rdma: Implement get_mdts controller op Max Gurtovoy
@ 2020-03-05 20:44   ` Sagi Grimberg
  2020-03-05 23:01     ` Max Gurtovoy
  0 siblings, 1 reply; 9+ messages in thread
From: Sagi Grimberg @ 2020-03-05 20:44 UTC (permalink / raw)
  To: Max Gurtovoy, jgg, linux-nvme, hch, kbusch, Chaitanya.Kulkarni
  Cc: krishna2, bharat, nirranjan, bvanassche



On 3/5/20 1:55 AM, Max Gurtovoy wrote:
> Set the maximum data transfer size to 1MB (currently mdts is
> unlimited). This allows calculating the number of MRs that
> one ctrl should allocate to fulfill its capabilities.
> 
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> ---
> 
> changes from V1:
>   - renamed nvmet_rdma_set_mdts to nvmet_rdma_get_mdts
>   - align to get_mdts callback changes
> 
> ---
>   drivers/nvme/target/rdma.c | 8 ++++++++
>   1 file changed, 8 insertions(+)
> 
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 37d262a..d12ef0d 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -30,6 +30,7 @@
>   #define NVMET_RDMA_DEFAULT_INLINE_DATA_SIZE	PAGE_SIZE
>   #define NVMET_RDMA_MAX_INLINE_SGE		4
>   #define NVMET_RDMA_MAX_INLINE_DATA_SIZE		max_t(int, SZ_16K, PAGE_SIZE)
> +#define NVMET_RDMA_MAX_MDTS			8
>   
>   struct nvmet_rdma_cmd {
>   	struct ib_sge		sge[NVMET_RDMA_MAX_INLINE_SGE + 1];
> @@ -1602,6 +1603,12 @@ static void nvmet_rdma_disc_port_addr(struct nvmet_req *req,
>   	}
>   }
>   
> +static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
> +{
> +	/* Assume mpsmin == device_page_size == 4KB */

This comment should come at the define entry.

> +	return NVMET_RDMA_MAX_MDTS;
> +}
> +
>   static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
>   	.owner			= THIS_MODULE,
>   	.type			= NVMF_TRTYPE_RDMA,
> @@ -1612,6 +1619,7 @@ static void nvmet_rdma_disc_port_addr(struct nvmet_req *req,
>   	.queue_response		= nvmet_rdma_queue_response,
>   	.delete_ctrl		= nvmet_rdma_delete_ctrl,
>   	.disc_traddr		= nvmet_rdma_disc_port_addr,
> +	.get_mdts		= nvmet_rdma_get_mdts,
>   };
>   
>   static void nvmet_rdma_remove_one(struct ib_device *ib_device, void *client_data)
> 


* Re: [PATCH V2 3/3] nvmet-rdma: allocate RW ctxs according to mdts
  2020-03-05  9:55 ` [PATCH V2 3/3] nvmet-rdma: allocate RW ctxs according to mdts Max Gurtovoy
@ 2020-03-05 20:49   ` Sagi Grimberg
  2020-03-05 22:59     ` Max Gurtovoy
  0 siblings, 1 reply; 9+ messages in thread
From: Sagi Grimberg @ 2020-03-05 20:49 UTC (permalink / raw)
  To: Max Gurtovoy, jgg, linux-nvme, hch, kbusch, Chaitanya.Kulkarni
  Cc: krishna2, bharat, nirranjan, bvanassche


> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index d12ef0d..daab656 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -976,7 +976,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
>   {
>   	struct ib_qp_init_attr qp_attr;
>   	struct nvmet_rdma_device *ndev = queue->dev;
> -	int comp_vector, nr_cqe, ret, i;
> +	int comp_vector, nr_cqe, ret, i, factor;
>   
>   	/*
>   	 * Spread the io queues across completion vectors,
> @@ -1009,7 +1009,9 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
>   	qp_attr.qp_type = IB_QPT_RC;
>   	/* +1 for drain */
>   	qp_attr.cap.max_send_wr = queue->send_queue_size + 1;
> -	qp_attr.cap.max_rdma_ctxs = queue->send_queue_size;
> +	factor = rdma_rw_mr_factor(ndev->device, queue->cm_id->port_num,
> +				   1 << NVMET_RDMA_MAX_MDTS);

Maybe I'm missing something, but aren't you missing the mpsmin
multiplier? Your maxpages is not (1 << 8) but rather (1 << 20), isn't it?


* Re: [PATCH V2 3/3] nvmet-rdma: allocate RW ctxs according to mdts
  2020-03-05 20:49   ` Sagi Grimberg
@ 2020-03-05 22:59     ` Max Gurtovoy
  2020-03-06  1:19       ` Sagi Grimberg
  0 siblings, 1 reply; 9+ messages in thread
From: Max Gurtovoy @ 2020-03-05 22:59 UTC (permalink / raw)
  To: Sagi Grimberg, jgg, linux-nvme, hch, kbusch, Chaitanya.Kulkarni
  Cc: krishna2, bharat, nirranjan, bvanassche


On 3/5/2020 10:49 PM, Sagi Grimberg wrote:
>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index d12ef0d..daab656 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -976,7 +976,7 @@ static int nvmet_rdma_create_queue_ib(struct 
>> nvmet_rdma_queue *queue)
>>   {
>>       struct ib_qp_init_attr qp_attr;
>>       struct nvmet_rdma_device *ndev = queue->dev;
>> -    int comp_vector, nr_cqe, ret, i;
>> +    int comp_vector, nr_cqe, ret, i, factor;
>>         /*
>>        * Spread the io queues across completion vectors,
>> @@ -1009,7 +1009,9 @@ static int nvmet_rdma_create_queue_ib(struct 
>> nvmet_rdma_queue *queue)
>>       qp_attr.qp_type = IB_QPT_RC;
>>       /* +1 for drain */
>>       qp_attr.cap.max_send_wr = queue->send_queue_size + 1;
>> -    qp_attr.cap.max_rdma_ctxs = queue->send_queue_size;
>> +    factor = rdma_rw_mr_factor(ndev->device, queue->cm_id->port_num,
>> +                   1 << NVMET_RDMA_MAX_MDTS);
>
> Maybe I'm missing something, but aren't you missing the mpsmin
> multiplier? Your maxpages is not (1 << 8) but rather (1 << 20), isn't it?

Why?

We support 256 pages with 4KB size each to get a 1MB "mdts".

The factor is in units of RW API contexts (which might limit the size of
the MRs according to some logic).
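
Spelled out, rdma_rw_mr_factor() takes a page count, not a byte count:

  maxpages = 1 << NVMET_RDMA_MAX_MDTS = 1 << 8 = 256 pages
  256 pages * 4KB/page = 1MB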




* Re: [PATCH V2 2/3] nvmet-rdma: Implement get_mdts controller op
  2020-03-05 20:44   ` Sagi Grimberg
@ 2020-03-05 23:01     ` Max Gurtovoy
  0 siblings, 0 replies; 9+ messages in thread
From: Max Gurtovoy @ 2020-03-05 23:01 UTC (permalink / raw)
  To: Sagi Grimberg, jgg, linux-nvme, hch, kbusch, Chaitanya.Kulkarni
  Cc: krishna2, bharat, nirranjan, bvanassche


On 3/5/2020 10:44 PM, Sagi Grimberg wrote:
>
>
> On 3/5/20 1:55 AM, Max Gurtovoy wrote:
>> Set the maximum data transfer size to 1MB (currently mdts is
>> unlimited). This allows calculating the number of MRs that
>> one ctrl should allocate to fulfill its capabilities.
>>
>> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
>> ---
>>
>> changes from V1:
>>   - renamed nvmet_rdma_set_mdts to nvmet_rdma_get_mdts
>>   - align to get_mdts callback changes
>>
>> ---
>>   drivers/nvme/target/rdma.c | 8 ++++++++
>>   1 file changed, 8 insertions(+)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index 37d262a..d12ef0d 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -30,6 +30,7 @@
>>   #define NVMET_RDMA_DEFAULT_INLINE_DATA_SIZE    PAGE_SIZE
>>   #define NVMET_RDMA_MAX_INLINE_SGE        4
>>   #define NVMET_RDMA_MAX_INLINE_DATA_SIZE        max_t(int, SZ_16K, 
>> PAGE_SIZE)
>> +#define NVMET_RDMA_MAX_MDTS            8
>>     struct nvmet_rdma_cmd {
>>       struct ib_sge        sge[NVMET_RDMA_MAX_INLINE_SGE + 1];
>> @@ -1602,6 +1603,12 @@ static void nvmet_rdma_disc_port_addr(struct 
>> nvmet_req *req,
>>       }
>>   }
>>   +static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
>> +{
>> +    /* Assume mpsmin == device_page_size == 4KB */
>
> This comment should come at the define entry.


Sure, no problem.


>
>> +    return NVMET_RDMA_MAX_MDTS;
>> +}
>> +
>>   static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
>>       .owner            = THIS_MODULE,
>>       .type            = NVMF_TRTYPE_RDMA,
>> @@ -1612,6 +1619,7 @@ static void nvmet_rdma_disc_port_addr(struct 
>> nvmet_req *req,
>>       .queue_response        = nvmet_rdma_queue_response,
>>       .delete_ctrl        = nvmet_rdma_delete_ctrl,
>>       .disc_traddr        = nvmet_rdma_disc_port_addr,
>> +    .get_mdts        = nvmet_rdma_get_mdts,
>>   };
>>     static void nvmet_rdma_remove_one(struct ib_device *ib_device, 
>> void *client_data)
>>


* Re: [PATCH V2 3/3] nvmet-rdma: allocate RW ctxs according to mdts
  2020-03-05 22:59     ` Max Gurtovoy
@ 2020-03-06  1:19       ` Sagi Grimberg
  0 siblings, 0 replies; 9+ messages in thread
From: Sagi Grimberg @ 2020-03-06  1:19 UTC (permalink / raw)
  To: Max Gurtovoy, jgg, linux-nvme, hch, kbusch, Chaitanya.Kulkarni
  Cc: krishna2, bharat, nirranjan, bvanassche


>>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>>> index d12ef0d..daab656 100644
>>> --- a/drivers/nvme/target/rdma.c
>>> +++ b/drivers/nvme/target/rdma.c
>>> @@ -976,7 +976,7 @@ static int nvmet_rdma_create_queue_ib(struct 
>>> nvmet_rdma_queue *queue)
>>>   {
>>>       struct ib_qp_init_attr qp_attr;
>>>       struct nvmet_rdma_device *ndev = queue->dev;
>>> -    int comp_vector, nr_cqe, ret, i;
>>> +    int comp_vector, nr_cqe, ret, i, factor;
>>>         /*
>>>        * Spread the io queues across completion vectors,
>>> @@ -1009,7 +1009,9 @@ static int nvmet_rdma_create_queue_ib(struct 
>>> nvmet_rdma_queue *queue)
>>>       qp_attr.qp_type = IB_QPT_RC;
>>>       /* +1 for drain */
>>>       qp_attr.cap.max_send_wr = queue->send_queue_size + 1;
>>> -    qp_attr.cap.max_rdma_ctxs = queue->send_queue_size;
>>> +    factor = rdma_rw_mr_factor(ndev->device, queue->cm_id->port_num,
>>> +                   1 << NVMET_RDMA_MAX_MDTS);
>>
>> Maybe I'm missing something, but aren't you missing the mpsmin
>> multiplier? Your maxpages is not (1 << 8) but rather (1 << 20), isn't it?
> 
> Why?
>
> We support 256 pages with 4KB size each to get a 1MB "mdts".
>
> The factor is in units of RW API contexts (which might limit the size of
> the MRs according to some logic).

You're right, never mind...

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

