From: Chuck Lever <chuck.lever@oracle.com>
To: anna.schumaker@netapp.com
Cc: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Subject: [PATCH v3 05/22] SUNRPC: Separate buffer pointers for RPC Call and Reply messages
Date: Thu, 15 Sep 2016 10:55:37 -0400	[thread overview]
Message-ID: <20160915145537.11080.82121.stgit@manet.1015granger.net> (raw)
In-Reply-To: <20160915143440.11080.89241.stgit@manet.1015granger.net>

For xprtrdma, the RPC Call and Reply buffers are involved in real
I/O operations.

To start with, the DMA direction of the I/O for a Call is opposite
that of a Reply.

In the current arrangement, the Reply buffer address is on a
four-byte alignment just past the Call buffer. It would be
friendlier on some platforms if it were at a DMA cache alignment
instead.

Because the current arrangement allocates a single memory region
which contains both buffers, the RPC Reply buffer often contains a
page boundary when the Call buffer is large enough (which is
common).
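
To make that concrete, here is a minimal user-space sketch (not part
of this patch; the page size and buffer sizes below are illustrative
only) of the combined allocation done today, showing where the Reply
area lands relative to a page boundary:

#include <stdio.h>
#include <stdlib.h>

#define EXAMPLE_PAGE_SIZE 4096UL	/* illustrative page size */

int main(void)
{
	size_t callsize = 3000;		/* illustrative Call buffer size */
	size_t rcvsize = 2048;		/* illustrative Reply buffer size */
	char *buffer = malloc(callsize + rcvsize);	/* like rq_buffer */
	char *rbuffer;
	unsigned long start, end;

	if (!buffer)
		return 1;
	rbuffer = buffer + callsize;	/* Reply area starts here today */
	start = (unsigned long)rbuffer;
	end = start + rcvsize - 1;
	printf("Reply area crosses %lu page boundary(ies)\n",
	       end / EXAMPLE_PAGE_SIZE - start / EXAMPLE_PAGE_SIZE);
	free(buffer);
	return 0;
}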

It would be a little nicer for setting up DMA operations (and
possible registration of the Reply buffer) if the two buffers were
separated, well-aligned, and contained as few page boundaries as
possible.

Now, I could just pad out the single memory region used for the pair
of buffers. But frequently that would mean a lot of unused space to
ensure the Reply buffer did not have a page boundary.
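
For comparison, a similarly illustrative sketch (again not part of
this patch) of what that padding would cost per request, assuming a
4KB page:

#include <stdio.h>

#define EXAMPLE_PAGE_SIZE 4096UL	/* illustrative page size */

int main(void)
{
	unsigned long callsize = 3000;	/* illustrative Call buffer size */
	/* Round the Call area up to a page boundary so the Reply
	 * area starts page-aligned. */
	unsigned long padded = (callsize + EXAMPLE_PAGE_SIZE - 1) &
			       ~(EXAMPLE_PAGE_SIZE - 1);

	printf("padding wastes %lu bytes on this request\n",
	       padded - callsize);	/* up to EXAMPLE_PAGE_SIZE - 1 */
	return 0;
}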

Add a separate pointer to rpc_rqst that points directly to the RPC
Reply buffer. This makes no difference to xprtsock, but it will help
xprtrdma in subsequent patches.
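
For illustration only, once the buffers really are allocated
separately, a transport-specific ->buf_alloc could end up doing
something along these lines (hypothetical sketch; the
example_alloc_aligned()/example_free_aligned() helpers are made up,
and this is not the code added later in this series):

/* Hypothetical sketch of a ->buf_alloc method that points rq_rbuffer
 * at a separately allocated Reply buffer; the allocation helpers are
 * illustrative only. */
static int example_buf_alloc(struct rpc_task *task)
{
	struct rpc_rqst *rqst = task->tk_rqstp;
	void *sendbuf = example_alloc_aligned(rqst->rq_callsize);
	void *recvbuf;

	if (!sendbuf)
		return -ENOMEM;
	recvbuf = example_alloc_aligned(rqst->rq_rcvsize);
	if (!recvbuf) {
		example_free_aligned(sendbuf);
		return -ENOMEM;
	}

	rqst->rq_buffer = sendbuf;	/* Call XDR encode buffer */
	rqst->rq_rbuffer = recvbuf;	/* Reply XDR decode buffer */
	return 0;
}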

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 include/linux/sunrpc/xprt.h     |    5 +++--
 net/sunrpc/clnt.c               |    2 +-
 net/sunrpc/sched.c              |    1 +
 net/sunrpc/xprtrdma/transport.c |    1 +
 4 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 72c2aeb..46f069e 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -84,8 +84,9 @@ struct rpc_rqst {
 	struct list_head	rq_list;
 
 	void			*rq_buffer;	/* Call XDR encode buffer */
-	size_t			rq_callsize,
-				rq_rcvsize;
+	size_t			rq_callsize;
+	void			*rq_rbuffer;	/* Reply XDR decode buffer */
+	size_t			rq_rcvsize;
 	size_t			rq_xmit_bytes_sent;	/* total bytes sent */
 	size_t			rq_reply_bytes_recvd;	/* total reply bytes */
 							/* received */
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 4dc7ed7..07d0faa 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -1768,7 +1768,7 @@ rpc_xdr_encode(struct rpc_task *task)
 		     req->rq_buffer,
 		     req->rq_callsize);
 	xdr_buf_init(&req->rq_rcv_buf,
-		     (char *)req->rq_buffer + req->rq_callsize,
+		     req->rq_rbuffer,
 		     req->rq_rcvsize);
 
 	p = rpc_encode_header(task);
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 6690ebc..5db68b3 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -891,6 +891,7 @@ int rpc_malloc(struct rpc_task *task)
 	dprintk("RPC: %5u allocated buffer of size %zu at %p\n",
 			task->tk_pid, size, buf);
 	rqst->rq_buffer = buf->data;
+	rqst->rq_rbuffer = (char *)rqst->rq_buffer + rqst->rq_callsize;
 	return 0;
 }
 EXPORT_SYMBOL_GPL(rpc_malloc);
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index ebf14ba..136caf3 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -524,6 +524,7 @@ out:
 	dprintk("RPC:       %s: size %zd, request 0x%p\n", __func__, size, req);
 	req->rl_connect_cookie = 0;	/* our reserved value */
 	rqst->rq_buffer = req->rl_sendbuf->rg_base;
+	rqst->rq_rbuffer = (char *)rqst->rq_buffer + rqst->rq_callsize;
 	return 0;
 
 out_rdmabuf:

Thread overview:
2016-09-15 14:54 [PATCH v3 00/22] client-side NFS/RDMA patches ready for v4.9 Chuck Lever
2016-09-15 14:55 ` [PATCH v3 01/22] xprtrdma: Eliminate INLINE_THRESHOLD macros Chuck Lever
2016-09-15 14:55 ` [PATCH v3 02/22] SUNRPC: Refactor rpc_xdr_buf_init() Chuck Lever
2016-09-15 14:55 ` [PATCH v3 03/22] SUNRPC: Generalize the RPC buffer allocation API Chuck Lever
2016-09-15 14:55 ` [PATCH v3 04/22] SUNRPC: Generalize the RPC buffer release API Chuck Lever
2016-09-15 14:55 ` [PATCH v3 05/22] SUNRPC: Separate buffer pointers for RPC Call and Reply messages Chuck Lever [this message]
2016-09-15 14:55 ` [PATCH v3 06/22] SUNRPC: Add a transport-specific private field in rpc_rqst Chuck Lever
2016-09-15 14:55 ` [PATCH v3 07/22] xprtrdma: Initialize separate RPC call and reply buffers Chuck Lever
2016-09-15 14:56 ` [PATCH v3 08/22] xprtrdma: Use smaller buffers for RPC-over-RDMA headers Chuck Lever
2016-09-15 14:56 ` [PATCH v3 09/22] xprtrdma: Replace DMA_BIDIRECTIONAL Chuck Lever
2016-09-15 14:56 ` [PATCH v3 10/22] xprtrdma: Delay DMA mapping Send and Receive buffers Chuck Lever
2016-09-15 14:56 ` [PATCH v3 11/22] xprtrdma: Eliminate "ia" argument in rpcrdma_{alloc, free}_regbuf Chuck Lever
2016-09-15 14:56 ` [PATCH v3 12/22] xprtrdma: Simplify rpcrdma_ep_post_recv() Chuck Lever
2016-09-15 14:56 ` [PATCH v3 13/22] xprtrdma: Move send_wr to struct rpcrdma_req Chuck Lever
2016-09-15 14:56 ` [PATCH v3 14/22] xprtrdma: Move recv_wr to struct rpcrdma_rep Chuck Lever
2016-09-15 14:56 ` [PATCH v3 15/22] rpcrdma: RDMA/CM private message data structure Chuck Lever
2016-09-15 14:57 ` [PATCH v3 16/22] xprtrdma: Client-side support for rpcrdma_connect_private Chuck Lever
2016-09-15 14:57 ` [PATCH v3 17/22] xprtrdma: Basic support for Remote Invalidation Chuck Lever
2016-09-15 14:57 ` [PATCH v3 18/22] xprtrdma: Use gathered Send for large inline messages Chuck Lever
2016-09-15 14:57 ` [PATCH v3 19/22] xprtrdma: Support larger inline thresholds Chuck Lever
2016-09-15 14:57 ` [PATCH v3 20/22] xprtrmda: Report address of frmr, not mw Chuck Lever
2016-09-15 14:57 ` [PATCH v3 21/22] xprtrdma: Rename rpcrdma_receive_wc() Chuck Lever
2016-09-15 14:57 ` [PATCH v3 22/22] xprtrdma: Eliminate rpcrdma_receive_worker() Chuck Lever
