From: Sagi Grimberg <sagi@grimberg.me>
To: Max Gurtovoy <mgurtovoy@nvidia.com>, Christoph Hellwig <hch@lst.de>
Cc: linux-nvme@lists.infradead.org, kbusch@kernel.org,
	chaitanyak@nvidia.com,  israelr@nvidia.com,
	mruijter@primelogic.nl, oren@nvidia.com, nitzanc@nvidia.com,
	Jason Gunthorpe <jgg@nvidia.com>
Subject: Re: [PATCH 2/2] nvmet-rdma: implement get_queue_size controller op
Date: Wed, 22 Sep 2021 12:18:15 +0300
Message-ID: <6df58d37-0519-9e97-bae5-a529084c8341@grimberg.me>
In-Reply-To: <01762668-1572-9f37-73e8-714d4ff23323@nvidia.com>


>>> So for now, as mentioned, until we have some ib_ API, let's set it to 128.
>> Please just add the proper ib_ API; it should not be a whole lot of
>> work, as we already do that calculation anyway for the R/W API setup.
> 
> We don't do this exact calculation, since only the low-level driver
> knows the number of WQEs we need for some sophisticated WRs.
> 
> The API we need is something like ib_get_qp_limits: the caller provides
> input describing the operations it will issue and receives the
> corresponding limits as output.
> 
> Then we need to divide that limit by some factor reflecting the maximum
> number of WRs per NVMe request (e.g. mem_reg + mem_invalidation +
> rdma_op + pi_yes_no).
> 
> I spoke with Jason about this, and we concluded that it's not a trivial
> patch.
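
For illustration, the calculation described above might look roughly like
the sketch below. The function name and the per-request factor are
hypothetical; only ib_device_attr.max_qp_wr is a real device attribute:

#include <rdma/ib_verbs.h>

/*
 * Divide the per-QP WQE budget by the worst-case number of WRs a single
 * NVMe request can post. Note this assumes one WQE per WR; some devices
 * need several WQEs for a single WR, and only the low-level driver knows
 * that ratio, which is exactly why a proper ib_ API is needed.
 */
static u32 nvmet_rdma_calc_queue_size(struct ib_device *dev, bool pi)
{
	/* mem_reg + mem_invalidation + rdma_op, plus one more for PI */
	u32 wrs_per_req = 3 + (pi ? 1 : 0);

	return dev->attrs.max_qp_wr / wrs_per_req;
}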

Can't you do this in rdma_rw? All of its users will need the exact same
value, right?
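
For reference, rdma_rw already exposes a related helper,
rdma_rw_mr_factor() in include/rdma/rw.h, for the MR-count side of this
calculation. A shared queue-depth helper could plausibly take a similar
shape; the name and signature below are purely illustrative:

/*
 * Hypothetical rdma_rw-level API: given the device, the port, and the
 * worst-case operations a ULP issues per request, return how many
 * requests fit into one QP's WQE budget.
 */
unsigned int rdma_rw_queue_depth(struct ib_device *dev, u32 port_num,
				 unsigned int maxpages, bool pi_enabled);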

> Is it necessary for this submission, or can we live with a depth of 128
> for now? With or without a new ib_ API, the queue depth will be in this
> range.

I am not sure I see the entire complexity. Even if this calculation is
not exact, you are already proposing to hard-code the depth to 128, so
you can do the calculation there and account for the device boundaries.
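
Concretely, the interim fallback could look like the following sketch;
the define names and the per-request WR estimate are illustrative, not
taken from the actual patch:

#include <linux/minmax.h>
#include <rdma/ib_verbs.h>

#define NVMET_RDMA_MAX_QUEUE_SIZE	128
/* assumed worst case per request: reg + invalidate + rdma_op + PI reg */
#define NVMET_RDMA_MAX_WRS_PER_REQ	4

static u32 nvmet_rdma_get_queue_size(struct ib_device *dev)
{
	u32 dev_limit = dev->attrs.max_qp_wr / NVMET_RDMA_MAX_WRS_PER_REQ;

	/* honor the device boundary even with the hard-coded cap */
	return min_t(u32, NVMET_RDMA_MAX_QUEUE_SIZE, dev_limit);
}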

Thread overview: 19+ messages
2021-09-21 19:04 [PATCH v1 0/2] update RDMA controllers queue depth Max Gurtovoy
2021-09-21 19:04 ` [PATCH 1/2] nvmet: add get_queue_size op for controllers Max Gurtovoy
2021-09-21 19:20   ` Chaitanya Kulkarni
2021-09-21 22:47   ` Sagi Grimberg
2021-09-22  7:35     ` Max Gurtovoy
2021-09-21 19:04 ` [PATCH 2/2] nvmet-rdma: implement get_queue_size controller op Max Gurtovoy
2021-09-21 19:21   ` Chaitanya Kulkarni
2021-09-21 22:52   ` Sagi Grimberg
2021-09-22  7:44     ` Max Gurtovoy
2021-09-22  7:45       ` Christoph Hellwig
2021-09-22  8:10         ` Max Gurtovoy
2021-09-22  9:18           ` Sagi Grimberg [this message]
2021-09-22  9:35             ` Max Gurtovoy
2021-09-22 12:10             ` Jason Gunthorpe
2021-09-22 12:57               ` Max Gurtovoy
2021-09-22 13:31                 ` Jason Gunthorpe
2021-09-22 14:00                   ` Max Gurtovoy
2021-09-21 19:22 ` [PATCH v1 0/2] update RDMA controllers queue depth Chaitanya Kulkarni
2021-09-21 19:42   ` Mark Ruijter
