* [PATCH v2 0/3] update RDMA controllers queue depth
@ 2021-09-22 21:55 Max Gurtovoy
  2021-09-22 21:55 ` [PATCH 1/3] nvme-rdma: limit the maximal queue size for RDMA controllers Max Gurtovoy
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Max Gurtovoy @ 2021-09-22 21:55 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi, chaitanyak
  Cc: israelr, mruijter, oren, nitzanc, jgg, Max Gurtovoy

Hi all,
This series solves an issue reported by Mark Ruijter while testing SPDK
initiators on VMware 7.x connecting to a Linux RDMA target running on an
NVIDIA (Mellanox Technologies) ConnectX-6 adapter. During connection
establishment, the NVMf target controller advertised a queue depth
capability of 1024 but was unable to satisfy that depth in practice,
because the NVMf driver did not take the underlying HW capabilities into
consideration. For now, limit the RDMA queue depth to 128, which is the
default and should work for all RDMA controllers. To do that, introduce
a new controller operation that returns the possible queue size for a
given HW. Other transports will continue with their old behaviour.

Also fix the other side of the coin: in case a target is capable of
exposing a queue depth of 1024 (or any value > 128), limit the initiator
side to open queues of at most 128 entries, to avoid failing to allocate
a larger QP.

In the future, in order to increase this size, we'll need a dedicated
RDMA API to calculate a possible queue depth for ULPs. RDMA IO
operations are sometimes built from multiple WRs (such as memory
registrations and invalidations), and the ULP driver should take this
into consideration during device discovery and queue depth
calculations.

Changes from V1:
 - added Reviewed-by signatures (Chaitanya)
 - renamed get_queue_size to get_max_queue_size (Sagi)
 - added a fix to the initiator side as well (Jason)
 - moved NVME_RDMA_MAX_QUEUE_SIZE to a common header file for nvme-rdma

Max Gurtovoy (3):
  nvme-rdma: limit the maximal queue size for RDMA controllers
  nvmet: add get_max_queue_size op for controllers
  nvmet-rdma: implement get_max_queue_size controller op

 drivers/nvme/host/rdma.c    | 7 +++++++
 drivers/nvme/target/core.c  | 8 +++++---
 drivers/nvme/target/nvmet.h | 1 +
 drivers/nvme/target/rdma.c  | 6 ++++++
 include/linux/nvme-rdma.h   | 2 ++
 5 files changed, 21 insertions(+), 3 deletions(-)

-- 
2.18.1




Thread overview: 5+ messages
2021-09-22 21:55 [PATCH v2 0/3] update RDMA controllers queue depth Max Gurtovoy
2021-09-22 21:55 ` [PATCH 1/3] nvme-rdma: limit the maximal queue size for RDMA controllers Max Gurtovoy
2021-09-22 21:55 ` [PATCH 2/3] nvmet: add get_max_queue_size op for controllers Max Gurtovoy
2021-09-22 21:55 ` [PATCH 3/3] nvmet-rdma: implement get_max_queue_size controller op Max Gurtovoy
2021-10-12 13:04 ` [PATCH v2 0/3] update RDMA controllers queue depth Christoph Hellwig
