From: Max Gurtovoy <maxg@mellanox.com>
To: James Smart <jsmart2021@gmail.com>,
	linux-nvme@lists.infradead.org, Sagi Grimberg <sagi@grimberg.me>
Cc: "Ewan D . Milne" <emilne@redhat.com>
Subject: Re: [PATCH] nvme-fc: set max_segments to lldd max value
Date: Wed, 15 Jul 2020 12:04:58 +0300	[thread overview]
Message-ID: <75927cec-e607-0293-47d3-f8b6141425f0@mellanox.com> (raw)
In-Reply-To: <20200714190336.119013-1-jsmart2021@gmail.com>


On 7/14/2020 10:03 PM, James Smart wrote:
> Currently the FC transport sets max_hw_sectors based on the lldd's
> max sgl segment count. However, the block queue max segments is
> set based on the controller's max_segments count, which the transport
> does not set.  As such, the lldd is receiving sgl lists that exceed
> its max segment count.
>
> Set the controller max segment count and derive max_hw_sectors from
> the max segment count.
>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> Reviewed-by: Ewan D. Milne <emilne@redhat.com>
> CC: Max Gurtovoy <maxg@mellanox.com>
>
> ---
> Looks like the setting of max_segments has been missing from all
> fabric transports for a while. RDMA had a fixup (ff13c1b87c97) from
> Max last fall that corrected this, but tcp and fc were still lacking.
> Copying Max to look at tcp.
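
For reference, the nvme core derives the block queue segment limit from
these controller fields roughly as below (a paraphrased sketch of
nvme_set_queue_limits() in drivers/nvme/host/core.c, not the exact
source; names are approximate):

/* Paraphrased sketch of nvme_set_queue_limits(), not the exact core code. */
static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
		struct request_queue *q)
{
	if (ctrl->max_hw_sectors) {
		/* segments implied by the transfer size and the ctrl page size */
		u32 max_segments =
			(ctrl->max_hw_sectors / (ctrl->page_size >> 9)) + 1;

		/* only clamps when the transport filled in ctrl->max_segments */
		max_segments = min_not_zero(max_segments, ctrl->max_segments);
		blk_queue_max_hw_sectors(q, ctrl->max_hw_sectors);
		blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
	}
	/* virt boundary, dma alignment and write cache setup follow */
}

With ctrl->max_segments left at zero the min_not_zero() clamp is a
no-op, so the queue segment limit is whatever falls out of
max_hw_sectors and the controller page size, and nothing ties it to the
lldd's sgl limit.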

The TCP stack does not limit the IO size, I guess because it's a
streaming protocol.

So I don't see a bug in the tcp driver, but we can consider limiting
the IO size if it improves performance in some cases.

BTW, AFAIK iSCSI over TCP behaves the same.

Sagi,

do you see any benefit we could get by limiting the mdts for tcp?


> ---
>   drivers/nvme/host/fc.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 6aa30bb5a762..e57e536546f7 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -3001,8 +3001,9 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
>   	if (ret)
>   		goto out_disconnect_admin_queue;
>   
> -	ctrl->ctrl.max_hw_sectors =
> -		(ctrl->lport->ops->max_sgl_segments - 1) << (PAGE_SHIFT - 9);
> +	ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments;
> +	ctrl->ctrl.max_hw_sectors = ctrl->ctrl.max_segments <<
> +						(ilog2(SZ_4K) - 9);
>   
>   	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>   
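
One note on the arithmetic in the new hunk, with made-up numbers purely
for illustration: the shift now uses a fixed 4K granularity instead of
PAGE_SHIFT, so each segment accounts for SZ_4K >> 9 = 8 sectors
regardless of the host page size.

	/* Illustrative numbers only, not taken from any real lldd. */
	u32 max_segments   = 256;	/* hypothetical lldd sgl segment limit */
	u32 max_hw_sectors = max_segments << (ilog2(SZ_4K) - 9);
					/* 256 << 3 = 2048 sectors = 1 MiB max IO */

So the transfer size cap now follows directly from the lldd's segment
count at 4K per segment, independent of the host page size.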

Looks good,

Reviewed-by: Max Gurtovoy <maxg@mellanox.com>



