From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>, Christoph Hellwig <hch@lst.de>
Cc: <linux-nvme@lists.infradead.org>, <kbusch@kernel.org>,
	<chaitanyak@nvidia.com>, <israelr@nvidia.com>,
	<mruijter@primelogic.nl>, <oren@nvidia.com>, <nitzanc@nvidia.com>,
	Jason Gunthorpe <jgg@nvidia.com>
Subject: Re: [PATCH 2/2] nvmet-rdma: implement get_queue_size controller op
Date: Wed, 22 Sep 2021 12:35:45 +0300
Message-ID: <67a2a26d-beca-5d6d-e58a-430cfc97020a@nvidia.com>
In-Reply-To: <6df58d37-0519-9e97-bae5-a529084c8341@grimberg.me>


On 9/22/2021 12:18 PM, Sagi Grimberg wrote:
>
>>>> So for now, as mentioned, until we have such an ib_ API, let's set
>>>> it to 128.
>>> Please just add the proper ib_ API, it should not be a whole lot of
>>> work as we already do that calculation anyway for the R/W API setup.
>>
>> We don't do this exact calculation, since only the low-level driver
>> knows the number of WQEs we need for some sophisticated WRs.
>>
>> The API we need is something like ib_get_qp_limits: one provides some
>> input on the operations it will issue and receives an output for it.
>>
>> Then we need to divide it by some factor that reflects the maximum
>> number of WRs per NVMe request (e.g. mem_reg + mem_invalidation +
>> rdma_op + pi_yes_no).
>>
>> I spoke with Jason about this, and we agreed that it's not a trivial patch.
>
> Can't you do this in rdma_rw? All of its users will need the
> exact same value, right?

The factor of operations per I/O request belongs in the RW API.

The factor of WRs to WQEs lives in the low-level driver, so it needs an
ib_ API.
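
To make that split concrete, here is a rough, purely illustrative
sketch of the direction (ib_get_qp_limits and struct ib_wr_profile are
placeholder names taken from this discussion; nothing like this exists
in the tree today):

/*
 * Hypothetical sketch only -- none of these symbols exist upstream.
 * The ULP describes the WRs it will post; the low-level driver, which
 * alone knows how many WQE basic blocks each WR expands to, reports
 * how many such WRs fit into a maximally sized QP.
 */
struct ib_wr_profile {
	u32	max_sge;	/* SGEs per RDMA READ/WRITE WR */
	bool	use_reg_mr;	/* memory registration WRs used */
	bool	use_pi;		/* T10-PI / integrity enabled */
};

int ib_get_qp_limits(struct ib_device *dev,
		     const struct ib_wr_profile *profile,
		     u32 *max_wrs_per_qp);

/*
 * The RW API / ULP side then applies the per-request factor, i.e. up
 * to mem_reg + mem_invalidation + rdma_op + pi_yes_no WRs per NVMe
 * request:
 *
 *	queue_size = max_wrs_per_qp / wrs_per_request;
 *
 * where wrs_per_request is at most 4 today.
 */

With something along these lines the queue depth would be derived
rather than guessed, but it needs support from every low-level driver,
which is why it is not a trivial patch.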


>
>> Is it necessary for this submission, or can we live with a depth of
>> 128 for now? With or without the new ib_ API, the queue depth will be
>> in this range.
>
> I am not sure I see the entire complexity. Even if this calc is not
> accurate, you are already proposing to hard-code it to 128, so you
> can do this to account for the boundaries there.

How does the ULP know the number of WQE basic blocks (BBs) per maximal WR operation?

I prepared a patch to handle the case where we claim to support X but
actually support less than X.

A value of 128 is supported by mlx devices, and I assume by other RDMA
devices as well, since it's the default value for the initiator.

The full solution includes changes to the RDMA RW API, the ib_ core,
and the low-level drivers to implement the new ib_ API.

I wanted to split this into an early solution (this series) and a full
solution (the above).
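
Concretely, the early solution amounts to something like the sketch
below (a minimal sketch; the names follow the patch subject and the
final code may differ):

/*
 * Interim approach: the RDMA transport reports a fixed queue depth the
 * devices are known to handle, and the core uses it to cap the queue
 * size advertised to the host.
 */
#define NVMET_RDMA_MAX_QUEUE_SIZE	128

static u16 nvmet_rdma_get_queue_size(const struct nvmet_ctrl *ctrl)
{
	return NVMET_RDMA_MAX_QUEUE_SIZE;
}

/*
 * Wired up via the new controller op in nvmet_rdma_ops, e.g.:
 *	.get_queue_size	= nvmet_rdma_get_queue_size,
 */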


