From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>,
	<linux-nvme@lists.infradead.org>, <kbusch@kernel.org>,
	<chaitanyak@nvidia.com>, <israelr@nvidia.com>,
	<mruijter@primelogic.nl>, <oren@nvidia.com>, <nitzanc@nvidia.com>,
	"Jason Gunthorpe" <jgg@nvidia.com>
Subject: Re: [PATCH 2/2] nvmet-rdma: implement get_queue_size controller op
Date: Wed, 22 Sep 2021 11:10:28 +0300	[thread overview]
Message-ID: <01762668-1572-9f37-73e8-714d4ff23323@nvidia.com> (raw)
In-Reply-To: <20210922074550.GA16099@lst.de>


On 9/22/2021 10:45 AM, Christoph Hellwig wrote:
> On Wed, Sep 22, 2021 at 10:44:20AM +0300, Max Gurtovoy wrote:
>> So for now, as mentioned, till we have some ib_ API, lets set it to 128.
> Please just add the proper ib_ API, it should not be a whole lot of
> work as we already do that calculation anyway for the R/W API setup.

We don't do this exact calculation, since only the low-level driver knows 
the number of WQEs we need for some sophisticated WRs.

The API we need is something like ib_get_qp_limits: the caller provides 
input describing the operations it will issue and receives the resulting 
limits as output.

Then we need to divide that by a factor reflecting the maximum number of 
WRs per NVMe request (e.g. mem_reg + mem_invalidation + rdma_op + 
pi_yes_no).

I spoke with Jason on that and we decided that it's not a trivial patch.

Is it necessary for this submission, or can we live with a depth of 128 
for now? With or without the new ib_ API, the queue depth will be in 
this range.


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

Thread overview: 19+ messages
2021-09-21 19:04 [PATCH v1 0/2] update RDMA controllers queue depth Max Gurtovoy
2021-09-21 19:04 ` [PATCH 1/2] nvmet: add get_queue_size op for controllers Max Gurtovoy
2021-09-21 19:20   ` Chaitanya Kulkarni
2021-09-21 22:47   ` Sagi Grimberg
2021-09-22  7:35     ` Max Gurtovoy
2021-09-21 19:04 ` [PATCH 2/2] nvmet-rdma: implement get_queue_size controller op Max Gurtovoy
2021-09-21 19:21   ` Chaitanya Kulkarni
2021-09-21 22:52   ` Sagi Grimberg
2021-09-22  7:44     ` Max Gurtovoy
2021-09-22  7:45       ` Christoph Hellwig
2021-09-22  8:10         ` Max Gurtovoy [this message]
2021-09-22  9:18           ` Sagi Grimberg
2021-09-22  9:35             ` Max Gurtovoy
2021-09-22 12:10             ` Jason Gunthorpe
2021-09-22 12:57               ` Max Gurtovoy
2021-09-22 13:31                 ` Jason Gunthorpe
2021-09-22 14:00                   ` Max Gurtovoy
2021-09-21 19:22 ` [PATCH v1 0/2] update RDMA controllers queue depth Chaitanya Kulkarni
2021-09-21 19:42   ` Mark Ruijter
