From: Chuck Lever III <chuck.lever@oracle.com>
To: David Howells <dhowells@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Al Viro <viro@zeniv.linux.org.uk>,
	Christoph Hellwig <hch@infradead.org>,
	Jens Axboe <axboe@kernel.dk>, Jeff Layton <jlayton@kernel.org>,
	Christian Brauner <brauner@kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	"open list:NETWORKING [GENERAL]" <netdev@vger.kernel.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux Memory Management List <linux-mm@kvack.org>,
	Trond Myklebust <trond.myklebust@hammerspace.com>,
	Anna Schumaker <anna@kernel.org>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: [RFC PATCH v2 40/48] sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather then sendpage
Date: Thu, 30 Mar 2023 13:27:30 +0000	[thread overview]
Message-ID: <3A132FA8-A764-416E-9753-08E368D6877A@oracle.com> (raw)
In-Reply-To: <812755.1680182190@warthog.procyon.org.uk>



> On Mar 30, 2023, at 9:16 AM, David Howells <dhowells@redhat.com> wrote:
> 
> David Howells <dhowells@redhat.com> wrote:
> 
>> Chuck Lever III <chuck.lever@oracle.com> wrote:
>> 
>>> Simply replacing the kernel_sendpage() loop would be a
>>> straightforward change and easy to evaluate and test, and
>>> I'd welcome that without hesitation.
>> 
>> How about the attached for a first phase?
>> 
>> It does three sendmsgs, one for the marker + header, one for the body and one
>> for the tail.
> 
> ... And this as a second phase.
> 
> David
> ---
> sunrpc: Allow xdr->bvec[] to be extended to do a single sendmsg
> 
> Allow xdr->bvec[] to be extended and insert the marker, the header and the
> tail into it so that a single sendmsg() can be used to transmit the message.

Don't. Just change svc_tcp_send_kvec() to use sock_sendmsg(), and
leave the marker alone for now, please.

Let's focus on replacing kernel_sendpage() in this series and
leave the deeper clean-ups for another time.


> I wonder if it would be possible to insert the marker at the beginning of the
> head buffer.

That's the way it used to work. The reason we don't do that is
that each transport has its own record-marking mechanism.

UDP has nothing, since each RPC message is encapsulated in a
single datagram. RDMA has a full XDR-encoded header which
contains the location of data chunks to be moved via RDMA.


> Signed-off-by: David Howells <dhowells@redhat.com>
> cc: Trond Myklebust <trond.myklebust@hammerspace.com>
> cc: Anna Schumaker <anna@kernel.org>
> cc: Chuck Lever <chuck.lever@oracle.com>
> cc: Jeff Layton <jlayton@kernel.org>
> cc: "David S. Miller" <davem@davemloft.net>
> cc: Eric Dumazet <edumazet@google.com>
> cc: Jakub Kicinski <kuba@kernel.org>
> cc: Paolo Abeni <pabeni@redhat.com>
> cc: Jens Axboe <axboe@kernel.dk>
> cc: Matthew Wilcox <willy@infradead.org>
> cc: linux-nfs@vger.kernel.org
> cc: netdev@vger.kernel.org
> ---
> include/linux/sunrpc/xdr.h |    2 -
> net/sunrpc/svcsock.c       |   46 ++++++++++++++-------------------------------
> net/sunrpc/xdr.c           |   19 ++++++++++--------
> net/sunrpc/xprtsock.c      |    6 ++---
> 4 files changed, 30 insertions(+), 43 deletions(-)
> 
> diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
> index 72014c9216fc..c74ea483228b 100644
> --- a/include/linux/sunrpc/xdr.h
> +++ b/include/linux/sunrpc/xdr.h
> @@ -137,7 +137,7 @@ void	xdr_inline_pages(struct xdr_buf *, unsigned int,
> 			 struct page **, unsigned int, unsigned int);
> void	xdr_terminate_string(const struct xdr_buf *, const u32);
> size_t	xdr_buf_pagecount(const struct xdr_buf *buf);
> -int	xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp);
> +int	xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp, unsigned int head, unsigned int tail);
> void	xdr_free_bvec(struct xdr_buf *buf);
> 
> static inline __be32 *xdr_encode_array(__be32 *p, const void *s, unsigned int len)
> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
> index 14efcc08c6f8..e55761fe1ccf 100644
> --- a/net/sunrpc/svcsock.c
> +++ b/net/sunrpc/svcsock.c
> @@ -569,7 +569,7 @@ static int svc_udp_sendto(struct svc_rqst *rqstp)
> 	if (svc_xprt_is_dead(xprt))
> 		goto out_notconn;
> 
> -	err = xdr_alloc_bvec(xdr, GFP_KERNEL);
> +	err = xdr_alloc_bvec(xdr, GFP_KERNEL, 0, 0);
> 	if (err < 0)
> 		goto out_unlock;
> 
> @@ -1073,45 +1073,29 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
> {
> 	const struct kvec *head = xdr->head;
> 	const struct kvec *tail = xdr->tail;
> -	struct kvec kv[2];
> -	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | MSG_MORE, };
> -	size_t sent;
> +	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES, };
> +	size_t n;
> 	int ret;
> 
> 	*sentp = 0;
> -	ret = xdr_alloc_bvec(xdr, GFP_KERNEL);
> +	ret = xdr_alloc_bvec(xdr, GFP_KERNEL, 2, 1);
> 	if (ret < 0)
> 		return ret;
> 
> -	kv[0].iov_base = &marker;
> -	kv[0].iov_len = sizeof(marker);
> -	kv[1] = *head;
> -	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, kv, 2, sizeof(marker) + head->iov_len);
> +	n = 2 + xdr_buf_pagecount(xdr);
> +	bvec_set_virt(&xdr->bvec[0], &marker, sizeof(marker));
> +	bvec_set_virt(&xdr->bvec[1], head->iov_base, head->iov_len);
> +	bvec_set_virt(&xdr->bvec[n], tail->iov_base, tail->iov_len);
> +	if (tail->iov_len)
> +		n++;
> +	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec, n,
> +		      sizeof(marker) + xdr->len);
> 	ret = sock_sendmsg(sock, &msg);
> 	if (ret < 0)
> 		return ret;
> -	sent = ret;
> -
> -	if (!tail->iov_len)
> -		msg.msg_flags &= ~MSG_MORE;
> -	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec,
> -		      xdr_buf_pagecount(xdr), xdr->page_len);
> -	ret = sock_sendmsg(sock, &msg);
> -	if (ret < 0)
> -		return ret;
> -	sent += ret;
> -
> -	if (tail->iov_len) {
> -		msg.msg_flags &= ~MSG_MORE;
> -		iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, tail, 1, tail->iov_len);
> -		ret = sock_sendmsg(sock, &msg);
> -		if (ret < 0)
> -			return ret;
> -		sent += ret;
> -	}
> -	if (sent > 0)
> -		*sentp = sent;
> -	if (sent != sizeof(marker) + xdr->len)
> +	if (ret > 0)
> +		*sentp = ret;
> +	if (ret != sizeof(marker) + xdr->len)
> 		return -EAGAIN;
> 	return 0;
> }
> diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
> index 36835b2f5446..695821963849 100644
> --- a/net/sunrpc/xdr.c
> +++ b/net/sunrpc/xdr.c
> @@ -141,18 +141,21 @@ size_t xdr_buf_pagecount(const struct xdr_buf *buf)
> }
> 
> int
> -xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp)
> +xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp, unsigned int head, unsigned int tail)
> {
> -	size_t i, n = xdr_buf_pagecount(buf);
> +	size_t i, j = 0, n = xdr_buf_pagecount(buf);
> 
> -	if (n != 0 && buf->bvec == NULL) {
> -		buf->bvec = kmalloc_array(n, sizeof(buf->bvec[0]), gfp);
> +	if (head + n + tail != 0 && buf->bvec == NULL) {
> +		buf->bvec = kmalloc_array(head + n + tail,
> +					  sizeof(buf->bvec[0]), gfp);
> 		if (!buf->bvec)
> 			return -ENOMEM;
> -		for (i = 0; i < n; i++) {
> -			bvec_set_page(&buf->bvec[i], buf->pages[i], PAGE_SIZE,
> -				      0);
> -		}
> +		for (i = 0; i < head; i++)
> +			bvec_set_page(&buf->bvec[j++], NULL, 0, 0);
> +		for (i = 0; i < n; i++)
> +			bvec_set_page(&buf->bvec[j++], buf->pages[i], PAGE_SIZE, 0);
> +		for (i = 0; i < tail; i++)
> +			bvec_set_page(&buf->bvec[j++], NULL, 0, 0);
> 	}
> 	return 0;
> }
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index adcbedc244d6..fdf67e84b1c7 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -825,7 +825,7 @@ static int xs_stream_nospace(struct rpc_rqst *req, bool vm_wait)
> 
> static int xs_stream_prepare_request(struct rpc_rqst *req, struct xdr_buf *buf)
> {
> -	return xdr_alloc_bvec(buf, rpc_task_gfp_mask());
> +	return xdr_alloc_bvec(buf, rpc_task_gfp_mask(), 0, 0);
> }
> 
> /*
> @@ -954,7 +954,7 @@ static int xs_udp_send_request(struct rpc_rqst *req)
> 	if (!xprt_request_get_cong(xprt, req))
> 		return -EBADSLT;
> 
> -	status = xdr_alloc_bvec(xdr, rpc_task_gfp_mask());
> +	status = xdr_alloc_bvec(xdr, rpc_task_gfp_mask(), 0, 0);
> 	if (status < 0)
> 		return status;
> 	req->rq_xtime = ktime_get();
> @@ -2591,7 +2591,7 @@ static int bc_sendto(struct rpc_rqst *req)
> 	int err;
> 
> 	req->rq_xtime = ktime_get();
> -	err = xdr_alloc_bvec(xdr, rpc_task_gfp_mask());
> +	err = xdr_alloc_bvec(xdr, rpc_task_gfp_mask(), 0, 0);
> 	if (err < 0)
> 		return err;
> 	err = xprt_sock_sendmsg(transport->sock, &msg, xdr, 0, marker, &sent);
> 

--
Chuck Lever


