From: Jason Gunthorpe <jgg@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: Max Gurtovoy <mgurtovoy@nvidia.com>,
	Christoph Hellwig <hch@lst.de>,
	linux-nvme@lists.infradead.org, kbusch@kernel.org,
	chaitanyak@nvidia.com, israelr@nvidia.com,
	mruijter@primelogic.nl, oren@nvidia.com, nitzanc@nvidia.com
Subject: Re: [PATCH 2/2] nvmet-rdma: implement get_queue_size controller op
Date: Wed, 22 Sep 2021 09:10:14 -0300
Message-ID: <20210922121014.GF327412@nvidia.com>
In-Reply-To: <6df58d37-0519-9e97-bae5-a529084c8341@grimberg.me>

On Wed, Sep 22, 2021 at 12:18:15PM +0300, Sagi Grimberg wrote:

> Can't you do this in rdma_rw? All of the users of it will need the
> exact same value, right?

No, it depends on what ops the user is going to use.
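
To illustrate the dependence (a hedged sketch; the names and factors
below are illustrative, not taken from the rdma_rw code): an IO that
has to register an MR consumes several send WRs (registration, the op
itself, an invalidate), while a plain send takes one, so the right QP
depth cannot be computed once for every user.

#include <stdbool.h>

/* Illustrative only: the per-IO send-WR cost depends on the ops in
 * use. A plain send is 1 WR; an MR-backed op is modeled here as 3 WRs
 * (REG + the op + local invalidate). The factors are assumptions. */
static unsigned int sketch_max_send_wr(unsigned int queue_size, bool uses_mrs)
{
	unsigned int wrs_per_io = uses_mrs ? 3 : 1;

	return queue_size * wrs_per_io + 1; /* +1 modeled slack for a drain WR */
}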
 
> > Is it necessary for this submission, or can we live with a depth of
> > 128 for now? With and without the new ib_ API the queue depth will
> > be in these sizes.
> 
> I am not sure I see the entire complexity. Even if this calc is not
> accurate, you are already proposing to hard-code it to 128, so you
> can do this to account for the boundaries there.

As I understood it, the 128 is there to match the limit the initiator
hardcodes. Both sides have the same basic problem with allocating the
RDMA QP; they just had different hard-coded limits. Because of this we
know that 128 is OK for all RDMA HW, as the initiator has already
proven it.

For a stable fix to the interop problem this is a good approach.
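
In sketch form, the stable fix amounts to the controller op reporting
that proven constant (the constant and function names here are
assumptions based on this thread, not necessarily the merged code):

/* Hypothetical shape of the get_queue_size controller op: report a
 * fixed depth of 128, which the initiator has already shown to be
 * allocatable on all RDMA HW. */
#define NVMET_RDMA_MAX_QUEUE_SIZE	128

static unsigned int nvmet_rdma_get_queue_size(void)
{
	return NVMET_RDMA_MAX_QUEUE_SIZE;
}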

If someone wants to add all sorts of complexity to try to figure out
the actual device-specific limit, then they should probably also show
that there is a performance win (or at least not a loss) from
increasing this number further.
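
If someone did go down that road, the device-specific bound would
presumably come from the device attributes (a sketch; wrs_per_io is
the same illustrative factor as above, and struct ib_device_attr comes
from the kernel's rdma/ib_verbs.h):

#include <rdma/ib_verbs.h>

/* Hypothetical refinement: clamp the advertised depth to what the
 * device says a QP can hold (max_qp_wr). Only worth the complexity
 * if it measurably beats the fixed 128. */
static unsigned int sketch_device_queue_size(const struct ib_device_attr *attrs,
					     unsigned int wanted,
					     unsigned int wrs_per_io)
{
	unsigned int hw_limit = attrs->max_qp_wr / wrs_per_io;

	return wanted < hw_limit ? wanted : hw_limit;
}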

Jason

