From: Chuck Lever <cel@kernel.org>
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: [PATCH v2 04/12] svcrdma: Increase the per-transport rw_ctx count
Date: Sun, 04 Feb 2024 18:16:56 -0500
Message-ID: <170708861688.28128.16380294131274226696.stgit@bazille.1015granger.net>
In-Reply-To: <170708844422.28128.2979813721958631192.stgit@bazille.1015granger.net>

From: Chuck Lever <chuck.lever@oracle.com>

rdma_rw_mr_factor() returns the smallest number of MRs needed to
move a particular number of pages. svcrdma currently asks for the
number of MRs needed to move RPCSVC_MAXPAGES (a little over one
megabyte), as that is the number of pages in the largest r/wsize
the server supports.
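
As a rough sketch of what that call computes (simplified; pages_per_mr
below is an illustrative stand-in for the device's per-MR page limit,
not a variable in this code):

	/* Illustration only, not verbatim kernel code: the factor is
	 * roughly the requested page count divided by the number of
	 * pages the device can cover with a single MR.
	 */
	factor = DIV_ROUND_UP(RPCSVC_MAXPAGES, pages_per_mr);
	ctxts  = factor * newxprt->sc_max_requests;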

This call assumes that the client's NIC can bundle a full one
megabyte payload in a single rdma_segment. In fact, most NICs cannot
handle a full megabyte with a single rkey / rdma_segment. Clients
will typically split even a single Read chunk into many segments.

The server needs one MR to read each rdma_segment in a Read chunk,
and thus each segment consumes one rw_ctx.

svcrdma has therefore been vastly underestimating the number of
rw_ctxs needed to handle 64 RPC requests whose large Read chunks
are built from small rdma_segments.
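
To put rough numbers on the gap (the per-MR page limit below is an
assumed figure for illustration; the 64 concurrent requests and the
~1MB maximum payload come from the text above):

  Old estimate, assuming a device that registers up to 256 pages
  per MR (4KB pages, RPCSVC_MAXPAGES roughly 260):

	rdma_rw_mr_factor(dev, port, RPCSVC_MAXPAGES) ~= 2
	ctxts = 2 * 64 requests = ~128 rw_ctxs

  Worst case actually needed, one page per rdma_segment:

	~260 rw_ctxs for a single maximally sized Read chunk
	~260 * 64 requests = ~16,640 rw_ctxs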

Unfortunately there doesn't seem to be a good way to estimate this
number without knowing the client NIC's capabilities. Even then,
the client RPC/RDMA implementation is still free to split a chunk
into smaller segments (for example, it might be using physical
registration, which needs an rdma_segment per page).

The best we can do for now is choose a number that will guarantee
forward progress in the worst case (one page per segment).
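
With 4KB pages (assumed), that worst-case reservation works out to
roughly:

	ctxts = 3 * RPCSVC_MAXPAGES;	/* ~3 * 260 = ~780 rw_ctxs */

which is enough for a few maximally sized, one-page-per-segment Read
chunks to be in flight at once, independent of what
rdma_rw_mr_factor() reports for the device.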

At some later point, we could add some mechanisms to make this
much less of a problem:
- Add a core API to add more rw_ctxs to an already-established QP
- svcrdma could treat rw_ctx exhaustion as a temporary error and
  try again
- Limit the number of Reads in flight

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/svc_rdma_transport.c |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 839c0e80e5cd..2b1c16b9547d 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -422,8 +422,13 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 		newxprt->sc_max_requests = rq_depth - 2;
 		newxprt->sc_max_bc_requests = 2;
 	}
-	ctxts = rdma_rw_mr_factor(dev, newxprt->sc_port_num, RPCSVC_MAXPAGES);
-	ctxts *= newxprt->sc_max_requests;
+
+	/* Arbitrarily estimate the number of rw_ctxs needed for
+	 * this transport. This is enough rw_ctxs to make forward
+	 * progress even if the client is using one rkey per page
+	 * in each Read chunk.
+	 */
+	ctxts = 3 * RPCSVC_MAXPAGES;
 	newxprt->sc_sq_depth = rq_depth + ctxts;
 	if (newxprt->sc_sq_depth > dev->attrs.max_qp_wr)
 		newxprt->sc_sq_depth = dev->attrs.max_qp_wr;


