From: Chuck Lever <chuck.lever@oracle.com>
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Subject: [PATCH v3 1/6] svcrdma: Do not send XDR roundup bytes for a write chunk
Date: Mon, 07 Dec 2015 15:42:31 -0500
Message-ID: <20151207204231.12988.59287.stgit@klimt.1015granger.net>
In-Reply-To: <20151207203851.12988.97804.stgit@klimt.1015granger.net>

Minor optimization: when dealing with write chunk XDR roundup, do
not post a Write WR for the zero bytes in the pad. Simply update
the write segment in the RPC-over-RDMA header to reflect the extra
pad bytes.
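As background for this optimization, a minimal sketch of the XDR roundup rule the pad comes from (the helper name is illustrative, not from this patch): XDR encodes each data item on a 4-byte boundary, so a payload whose length is not a multiple of 4 is followed by 1-3 zero pad bytes on the wire.

```c
#include <assert.h>

/* Hypothetical helper: number of zero pad bytes XDR roundup appends
 * after a data item of the given length. Always 0-3. */
static unsigned int xdr_pad_size(unsigned int len)
{
	return (4 - (len & 3)) & 3;
}
```

For example, a 5-byte payload carries 3 pad bytes, and an 8-byte payload carries none; this patch avoids posting an RDMA Write just to transfer those zeroes.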

The Reply chunk is also a write chunk, but the server does not use
send_write_chunks() to send the Reply chunk. That's OK in this case:
the server Upper Layer typically marshals the Reply chunk contents
in a single contiguous buffer, without a separate tail for the XDR
pad.

The comments and the variable naming refer to "chunks" but what is
really meant is "segments." The existing code sends only one
xdr_write_chunk per RPC reply.

The fix assumes this as well. When the XDR pad in the first write
chunk is reached, the assumption is that the Write list is complete,
so send_write_chunks() returns.

That will remain a valid assumption until the server Upper Layer can
support multiple bulk payload results per RPC.
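The decision the patch adds can be modeled in isolation (names are illustrative, not the kernel's): once at least one segment has been written (chunk_no is non-zero) and fewer than 4 bytes remain, the remainder can only be the XDR pad, so no Write WR is posted for it.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the check added by this patch: skip posting an
 * RDMA Write when only the XDR roundup pad (under one 4-byte XDR
 * quantum) is left and at least one segment was already written. */
static bool skip_xdr_pad(int chunk_no, unsigned int write_len)
{
	return chunk_no != 0 && write_len < 4;
}
```

Note the chunk_no test: the first segment is never skipped, so a genuinely short (under 4 byte) payload is still written.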

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/svc_rdma_sendto.c |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 969a1ab..bad5eaa 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -342,6 +342,13 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
 						arg_ch->rs_handle,
 						arg_ch->rs_offset,
 						write_len);
+
+		/* Do not send XDR pad bytes */
+		if (chunk_no && write_len < 4) {
+			chunk_no++;
+			break;
+		}
+
 		chunk_off = 0;
 		while (write_len) {
 			ret = send_write(xprt, rqstp,

--
Thread overview: 22+ messages

2015-12-07 20:42 [PATCH v3 0/6] NFS/RDMA server patches for 4.5 Chuck Lever
2015-12-07 20:42 ` [PATCH v3 1/6] svcrdma: Do not send XDR roundup bytes for a write chunk Chuck Lever [this message]
2015-12-13  3:14   ` Tom Talpey
2015-12-13 19:44     ` Chuck Lever
2015-12-07 20:42 ` [PATCH v3 2/6] svcrdma: Improve allocation of struct svc_rdma_op_ctxt Chuck Lever
2015-12-07 20:42 ` [PATCH v3 3/6] svcrdma: Define maximum number of backchannel requests Chuck Lever
2015-12-07 20:42 ` [PATCH v3 4/6] svcrdma: Add infrastructure to send backwards direction RPC/RDMA calls Chuck Lever
2015-12-07 20:43 ` [PATCH v3 5/6] svcrdma: Add infrastructure to receive backwards direction RPC/RDMA replies Chuck Lever
2015-12-13  3:24   ` Tom Talpey
2015-12-13 20:27     ` Chuck Lever
2015-12-07 20:43 ` [PATCH v3 6/6] xprtrdma: Add class for RDMA backwards direction transport Chuck Lever