Linux-NVME Archive on lore.kernel.org
* [PATCH] nvme-fc: set max_segments to lldd max value
@ 2020-07-14 19:03 James Smart
  2020-07-15  9:04 ` Max Gurtovoy
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: James Smart @ 2020-07-14 19:03 UTC (permalink / raw)
  To: linux-nvme; +Cc: Max Gurtovoy, James Smart, Ewan D . Milne

Currently the FC transport sets max_hw_sectors based on the lldd's
max sgl segment count. However, the block queue max segments is
set based on the controller's max_segments count, which the transport
does not set.  As such, the lldd receives sgl lists that exceed
its max segment count.

Set the controller max segment count and derive max_hw_sectors from
the max segment count.

Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Ewan D. Milne <emilne@redhat.com>
CC: Max Gurtovoy <maxg@mellanox.com>

---
Looks like the setting of max_segments has been missing from all
fabric transports for a while. Rdma had a fixup (ff13c1b87c97) from
Max last fall that corrected this. But tcp and fc were still lacking.
Copying Max to look at tcp.
---
 drivers/nvme/host/fc.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 6aa30bb5a762..e57e536546f7 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -3001,8 +3001,9 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 	if (ret)
 		goto out_disconnect_admin_queue;
 
-	ctrl->ctrl.max_hw_sectors =
-		(ctrl->lport->ops->max_sgl_segments - 1) << (PAGE_SHIFT - 9);
+	ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments;
+	ctrl->ctrl.max_hw_sectors = ctrl->ctrl.max_segments <<
+						(ilog2(SZ_4K) - 9);
 
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 
-- 
2.26.2


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH] nvme-fc: set max_segments to lldd max value
  2020-07-14 19:03 [PATCH] nvme-fc: set max_segments to lldd max value James Smart
@ 2020-07-15  9:04 ` Max Gurtovoy
  2020-07-20 19:56   ` Sagi Grimberg
  2020-07-20 21:11 ` Himanshu Madhani
  2020-07-26 15:43 ` Christoph Hellwig
  2 siblings, 1 reply; 5+ messages in thread
From: Max Gurtovoy @ 2020-07-15  9:04 UTC (permalink / raw)
  To: James Smart, linux-nvme, Sagi Grimberg; +Cc: Ewan D . Milne


On 7/14/2020 10:03 PM, James Smart wrote:
> Currently the FC transport sets max_hw_sectors based on the lldd's
> max sgl segment count. However, the block queue max segments is
> set based on the controller's max_segments count, which the transport
> does not set.  As such, the lldd receives sgl lists that exceed
> its max segment count.
>
> Set the controller max segment count and derive max_hw_sectors from
> the max segment count.
>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> Reviewed-by: Ewan D. Milne <emilne@redhat.com>
> CC: Max Gurtovoy <maxg@mellanox.com>
>
> ---
> Looks like the setting of max_segments has been missing from all
> fabric transports for a while. Rdma had a fixup (ff13c1b87c97) from
> Max last fall that corrected this. But tcp and fc were still lacking.
> Copying Max to look at tcp.

The TCP transport does not limit the IO size, I guess because it's a 
streaming protocol.

So I don't see a bug in the tcp driver, but we could consider limiting 
the IO size if it improves performance in some cases.

BTW, AFAIK iSCSI over TCP behaves the same.

Sagi,

do you see some benefits we can get by limiting the mdts for tcp ?


> ---
>   drivers/nvme/host/fc.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 6aa30bb5a762..e57e536546f7 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -3001,8 +3001,9 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
>   	if (ret)
>   		goto out_disconnect_admin_queue;
>   
> -	ctrl->ctrl.max_hw_sectors =
> -		(ctrl->lport->ops->max_sgl_segments - 1) << (PAGE_SHIFT - 9);
> +	ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments;
> +	ctrl->ctrl.max_hw_sectors = ctrl->ctrl.max_segments <<
> +						(ilog2(SZ_4K) - 9);
>   
>   	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>   

Looks good,

Reviewed-by: Max Gurtovoy <maxg@mellanox.com>





* Re: [PATCH] nvme-fc: set max_segments to lldd max value
  2020-07-15  9:04 ` Max Gurtovoy
@ 2020-07-20 19:56   ` Sagi Grimberg
  0 siblings, 0 replies; 5+ messages in thread
From: Sagi Grimberg @ 2020-07-20 19:56 UTC (permalink / raw)
  To: Max Gurtovoy, James Smart, linux-nvme; +Cc: Ewan D . Milne


>> Currently the FC transport sets max_hw_sectors based on the lldd's
>> max sgl segment count. However, the block queue max segments is
>> set based on the controller's max_segments count, which the transport
>> does not set.  As such, the lldd receives sgl lists that exceed
>> its max segment count.
>>
>> Set the controller max segment count and derive max_hw_sectors from
>> the max segment count.
>>
>> Signed-off-by: James Smart <jsmart2021@gmail.com>
>> Reviewed-by: Ewan D. Milne <emilne@redhat.com>
>> CC: Max Gurtovoy <maxg@mellanox.com>
>>
>> ---
>> Looks like the setting of max_segments has been missing from all
>> fabric transports for a while. Rdma had a fixup (ff13c1b87c97) from
>> Max last fall that corrected this. But tcp and fc were still lacking.
>> Copying Max to look at tcp.
> 
> The TCP transport does not limit the IO size, I guess because it's a 
> streaming protocol.
> 
> So I don't see a bug in the tcp driver, but we could consider limiting 
> the IO size if it improves performance in some cases.
> 
> BTW, AFAIK iSCSI over TCP behaves the same.
> 
> Sagi,
> 
> do you see some benefits we can get by limiting the mdts for tcp ?

Not really...



* Re: [PATCH] nvme-fc: set max_segments to lldd max value
  2020-07-14 19:03 [PATCH] nvme-fc: set max_segments to lldd max value James Smart
  2020-07-15  9:04 ` Max Gurtovoy
@ 2020-07-20 21:11 ` Himanshu Madhani
  2020-07-26 15:43 ` Christoph Hellwig
  2 siblings, 0 replies; 5+ messages in thread
From: Himanshu Madhani @ 2020-07-20 21:11 UTC (permalink / raw)
  To: James Smart, linux-nvme; +Cc: Max Gurtovoy, Ewan D . Milne



On 7/14/20 2:03 PM, James Smart wrote:
> Currently the FC transport sets max_hw_sectors based on the lldd's
> max sgl segment count. However, the block queue max segments is
> set based on the controller's max_segments count, which the transport
> does not set.  As such, the lldd receives sgl lists that exceed
> its max segment count.
> 
> Set the controller max segment count and derive max_hw_sectors from
> the max segment count.
> 
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> Reviewed-by: Ewan D. Milne <emilne@redhat.com>
> CC: Max Gurtovoy <maxg@mellanox.com>
> 
> ---
> Looks like the setting of max_segments has been missing from all
> fabric transports for a while. Rdma had a fixup (ff13c1b87c97) from
> Max last fall that corrected this. But tcp and fc were still lacking.
> Copying Max to look at tcp.
> ---
>   drivers/nvme/host/fc.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 6aa30bb5a762..e57e536546f7 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -3001,8 +3001,9 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
>   	if (ret)
>   		goto out_disconnect_admin_queue;
>   
> -	ctrl->ctrl.max_hw_sectors =
> -		(ctrl->lport->ops->max_sgl_segments - 1) << (PAGE_SHIFT - 9);
> +	ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments;
> +	ctrl->ctrl.max_hw_sectors = ctrl->ctrl.max_segments <<
> +						(ilog2(SZ_4K) - 9);
>   
>   	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>   
> 

Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>

-- 
Himanshu Madhani                     Oracle Linux Engineering



* Re: [PATCH] nvme-fc: set max_segments to lldd max value
  2020-07-14 19:03 [PATCH] nvme-fc: set max_segments to lldd max value James Smart
  2020-07-15  9:04 ` Max Gurtovoy
  2020-07-20 21:11 ` Himanshu Madhani
@ 2020-07-26 15:43 ` Christoph Hellwig
  2 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2020-07-26 15:43 UTC (permalink / raw)
  To: James Smart; +Cc: Max Gurtovoy, Ewan D . Milne, linux-nvme

Applied to nvme-5.9, thanks.


