* [PATCH v3 net-next 0/2] tcp: mmap: rework zerocopy receive
@ 2018-04-26 14:50 Eric Dumazet
  2018-04-26 14:50 ` [PATCH v3 net-next 1/2] tcp: add TCP_ZEROCOPY_RECEIVE support for " Eric Dumazet
  2018-04-26 14:50 ` [PATCH v3 net-next 2/2] selftests: net: tcp_mmap must use TCP_ZEROCOPY_RECEIVE Eric Dumazet
  0 siblings, 2 replies; 5+ messages in thread
From: Eric Dumazet @ 2018-04-26 14:50 UTC
  To: David S . Miller
  Cc: netdev, Andy Lutomirski, linux-kernel, linux-mm, Ka-Cheong Poon,
	Eric Dumazet, Eric Dumazet

syzbot reported a lockdep issue caused by tcp mmap() support.

I implemented Andy Lutomirski's nice suggestions to resolve the
issue and increase scalability as well.

The first patch adds a new getsockopt() operation and changes mmap()
behavior.

The second patch updates the tcp_mmap reference program.
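
For illustration, a minimal user-space sketch of the intended call
sequence (zerocopy_chunk is a hypothetical helper; error handling and
the consumption of the mapped bytes are elided):

	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <linux/tcp.h>	/* TCP_ZEROCOPY_RECEIVE, struct tcp_zerocopy_receive */

	/* addr must have been reserved once via
	 * mmap(NULL, chunk_size, PROT_READ, MAP_SHARED, fd, 0).
	 */
	static void zerocopy_chunk(int fd, void *addr, unsigned int chunk_size)
	{
		struct tcp_zerocopy_receive zc = {
			.address = (__u64)addr,
			.length  = chunk_size,
		};
		socklen_t zc_len = sizeof(zc);

		if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &zc_len))
			return;
		/* zc.length bytes are now mapped at addr;
		 * zc.recv_skip_hint bytes should be read()/recv()ed conventionally.
		 */
	}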

v3: change TCP_ZEROCOPY_RECEIVE to be a getsockopt() option
    instead of setsockopt(), per feedback from Ka-Cheong Poon

v2: Added missing page alignment of zc->length in tcp_zerocopy_receive().
    Properly clear zc->recv_skip_hint when the user request was fully completed.

Eric Dumazet (2):
  tcp: add TCP_ZEROCOPY_RECEIVE support for zerocopy receive
  selftests: net: tcp_mmap must use TCP_ZEROCOPY_RECEIVE

 include/uapi/linux/tcp.h               |   8 ++
 net/ipv4/tcp.c                         | 192 +++++++++++++------------
 tools/testing/selftests/net/tcp_mmap.c |  64 +++++----
 3 files changed, 146 insertions(+), 118 deletions(-)

-- 
2.17.0.484.g0c8726318c-goog


* [PATCH v3 net-next 1/2] tcp: add TCP_ZEROCOPY_RECEIVE support for zerocopy receive
  2018-04-26 14:50 [PATCH v3 net-next 0/2] tcp: mmap: rework zerocopy receive Eric Dumazet
@ 2018-04-26 14:50 ` Eric Dumazet
  2018-04-26 14:58   ` Soheil Hassas Yeganeh
  2018-04-26 14:50 ` [PATCH v3 net-next 2/2] selftests: net: tcp_mmap must use TCP_ZEROCOPY_RECEIVE Eric Dumazet
  1 sibling, 1 reply; 5+ messages in thread
From: Eric Dumazet @ 2018-04-26 14:50 UTC
  To: David S . Miller
  Cc: netdev, Andy Lutomirski, linux-kernel, linux-mm, Ka-Cheong Poon,
	Eric Dumazet, Eric Dumazet, Soheil Hassas Yeganeh

When adding the tcp mmap() implementation, I forgot that the socket lock
had to be taken before current->mm->mmap_sem. syzbot eventually caught
the bug.

Since we cannot lock the socket in the tcp mmap() handler, we have to
split the operation into two phases.

1) mmap() on a tcp socket simply reserves VMA space, and nothing else.
   This operation does not involve any TCP locking.

2) getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, ...) implements
   the transfer of pages from skbs to one VMA.
   This operation only uses down_read(&current->mm->mmap_sem) after
   holding the TCP lock, thus solving the lockdep issue.
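
Schematically, the locking before and after (a sketch, not actual code):

   old model (deadlock-prone)           new model (this patch)
   --------------------------           ----------------------
   mmap()                               getsockopt(TCP_ZEROCOPY_RECEIVE)
     down_write(mmap_sem)  [mm core]      lock_sock(sk)
     lock_sock(sk)                        down_read(&current->mm->mmap_sem)
                                          ... vm_insert_page() ...
                                          up_read(&current->mm->mmap_sem)
                                          release_sock(sk)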

This new implementation was suggested, in great detail, by Andy Lutomirski.

Benefits are:

- Better scalability, in case multiple threads reuse VMAs
   (without mmap()/munmap() calls), since mmap_sem won't be write-locked.

- Better error recovery.
   The previous mmap() model had to provide the expected size of the
   mapping. If for some reason one part could not be mapped (partial MSS),
   the whole operation had to be aborted.
   With the tcp_zerocopy_receive struct, the kernel can report how
   many bytes were successfully mapped, and how many bytes should
   be read to skip over the problematic sequence.

- No more memory allocation to hold an array of page pointers.
  A 16 MB mapping needed 32 KB for this array, potentially using vmalloc() :/

- skbs are freed after mmap_sem has been released.

The following patch updates the tcp_mmap reference tool to demonstrate
one possible use of mmap() and getsockopt(... TCP_ZEROCOPY_RECEIVE ...).

Note that memcg might require additional changes.

Fixes: 93ab6cc69162 ("tcp: implement mmap() for zero copy receive")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Cc: linux-mm@kvack.org
Cc: Soheil Hassas Yeganeh <soheil@google.com>
---
 include/uapi/linux/tcp.h |   8 ++
 net/ipv4/tcp.c           | 192 ++++++++++++++++++++-------------------
 2 files changed, 109 insertions(+), 91 deletions(-)

diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h
index 379b08700a542d49bbce9b4b49b17879d00b69bb..e9e8373b34b9ddc735329341b91f455bf5c0b17c 100644
--- a/include/uapi/linux/tcp.h
+++ b/include/uapi/linux/tcp.h
@@ -122,6 +122,7 @@ enum {
 #define TCP_MD5SIG_EXT		32	/* TCP MD5 Signature with extensions */
 #define TCP_FASTOPEN_KEY	33	/* Set the key for Fast Open (cookie) */
 #define TCP_FASTOPEN_NO_COOKIE	34	/* Enable TFO without a TFO cookie */
+#define TCP_ZEROCOPY_RECEIVE	35
 
 struct tcp_repair_opt {
 	__u32	opt_code;
@@ -276,4 +277,11 @@ struct tcp_diag_md5sig {
 	__u8	tcpm_key[TCP_MD5SIG_MAXKEYLEN];
 };
 
+/* getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, ...) */
+
+struct tcp_zerocopy_receive {
+	__u64 address;		/* in: address of mapping */
+	__u32 length;		/* in/out: number of bytes to map/mapped */
+	__u32 recv_skip_hint;	/* out: amount of bytes to skip */
+};
 #endif /* _UAPI_LINUX_TCP_H */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index dfd090ea54ad47112fc23c61180b5bf8edd2c736..c10c4a41ad39d6f8ae472882b243c2b70c915546 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1726,118 +1726,111 @@ int tcp_set_rcvlowat(struct sock *sk, int val)
 }
 EXPORT_SYMBOL(tcp_set_rcvlowat);
 
-/* When user wants to mmap X pages, we first need to perform the mapping
- * before freeing any skbs in receive queue, otherwise user would be unable
- * to fallback to standard recvmsg(). This happens if some data in the
- * requested block is not exactly fitting in a page.
- *
- * We only support order-0 pages for the moment.
- * mmap() on TCP is very strict, there is no point
- * trying to accommodate with pathological layouts.
- */
+static const struct vm_operations_struct tcp_vm_ops = {
+};
+
 int tcp_mmap(struct file *file, struct socket *sock,
 	     struct vm_area_struct *vma)
 {
-	unsigned long size = vma->vm_end - vma->vm_start;
-	unsigned int nr_pages = size >> PAGE_SHIFT;
-	struct page **pages_array = NULL;
-	u32 seq, len, offset, nr = 0;
-	struct sock *sk = sock->sk;
-	const skb_frag_t *frags;
+	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
+		return -EPERM;
+	vma->vm_flags &= ~(VM_MAYWRITE | VM_MAYEXEC);
+
+	/* Instruct vm_insert_page() to not down_read(mmap_sem) */
+	vma->vm_flags |= VM_MIXEDMAP;
+
+	vma->vm_ops = &tcp_vm_ops;
+	return 0;
+}
+EXPORT_SYMBOL(tcp_mmap);
+
+static int tcp_zerocopy_receive(struct sock *sk,
+				struct tcp_zerocopy_receive *zc)
+{
+	unsigned long address = (unsigned long)zc->address;
+	const skb_frag_t *frags = NULL;
+	u32 length = 0, seq, offset;
+	struct vm_area_struct *vma;
+	struct sk_buff *skb = NULL;
 	struct tcp_sock *tp;
-	struct sk_buff *skb;
 	int ret;
 
-	if (vma->vm_pgoff || !nr_pages)
+	if (address & (PAGE_SIZE - 1) || address != zc->address)
 		return -EINVAL;
 
-	if (vma->vm_flags & VM_WRITE)
-		return -EPERM;
-	/* TODO: Maybe the following is not needed if pages are COW */
-	vma->vm_flags &= ~VM_MAYWRITE;
-
-	lock_sock(sk);
-
-	ret = -ENOTCONN;
 	if (sk->sk_state == TCP_LISTEN)
-		goto out;
+		return -ENOTCONN;
 
 	sock_rps_record_flow(sk);
 
-	if (tcp_inq(sk) < size) {
-		ret = sock_flag(sk, SOCK_DONE) ? -EIO : -EAGAIN;
+	down_read(&current->mm->mmap_sem);
+
+	ret = -EINVAL;
+	vma = find_vma(current->mm, address);
+	if (!vma || vma->vm_start > address || vma->vm_ops != &tcp_vm_ops)
 		goto out;
-	}
+	zc->length = min_t(unsigned long, zc->length, vma->vm_end - address);
+
 	tp = tcp_sk(sk);
 	seq = tp->copied_seq;
-	/* Abort if urgent data is in the area */
-	if (unlikely(tp->urg_data)) {
-		u32 urg_offset = tp->urg_seq - seq;
+	zc->length = min_t(u32, zc->length, tcp_inq(sk));
+	zc->length &= ~(PAGE_SIZE - 1);
 
-		ret = -EINVAL;
-		if (urg_offset < size)
-			goto out;
-	}
-	ret = -ENOMEM;
-	pages_array = kvmalloc_array(nr_pages, sizeof(struct page *),
-				     GFP_KERNEL);
-	if (!pages_array)
-		goto out;
-	skb = tcp_recv_skb(sk, seq, &offset);
-	ret = -EINVAL;
-skb_start:
-	/* We do not support anything not in page frags */
-	offset -= skb_headlen(skb);
-	if ((int)offset < 0)
-		goto out;
-	if (skb_has_frag_list(skb))
-		goto out;
-	len = skb->data_len - offset;
-	frags = skb_shinfo(skb)->frags;
-	while (offset) {
-		if (frags->size > offset)
-			goto out;
-		offset -= frags->size;
-		frags++;
-	}
-	while (nr < nr_pages) {
-		if (len) {
-			if (len < PAGE_SIZE)
-				goto out;
-			if (frags->size != PAGE_SIZE || frags->page_offset)
-				goto out;
-			pages_array[nr++] = skb_frag_page(frags);
-			frags++;
-			len -= PAGE_SIZE;
-			seq += PAGE_SIZE;
-			continue;
-		}
-		skb = skb->next;
-		offset = seq - TCP_SKB_CB(skb)->seq;
-		goto skb_start;
-	}
-	/* OK, we have a full set of pages ready to be inserted into vma */
-	for (nr = 0; nr < nr_pages; nr++) {
-		ret = vm_insert_page(vma, vma->vm_start + (nr << PAGE_SHIFT),
-				     pages_array[nr]);
-		if (ret)
-			goto out;
-	}
-	/* operation is complete, we can 'consume' all skbs */
-	tp->copied_seq = seq;
-	tcp_rcv_space_adjust(sk);
-
-	/* Clean up data we have read: This will do ACK frames. */
-	tcp_recv_skb(sk, seq, &offset);
-	tcp_cleanup_rbuf(sk, size);
+	zap_page_range(vma, address, zc->length);
 
+	zc->recv_skip_hint = 0;
 	ret = 0;
+	while (length + PAGE_SIZE <= zc->length) {
+		if (zc->recv_skip_hint < PAGE_SIZE) {
+			if (skb) {
+				skb = skb->next;
+				offset = seq - TCP_SKB_CB(skb)->seq;
+			} else {
+				skb = tcp_recv_skb(sk, seq, &offset);
+			}
+
+			zc->recv_skip_hint = skb->len - offset;
+			offset -= skb_headlen(skb);
+			if ((int)offset < 0 || skb_has_frag_list(skb))
+				break;
+			frags = skb_shinfo(skb)->frags;
+			while (offset) {
+				if (frags->size > offset)
+					goto out;
+				offset -= frags->size;
+				frags++;
+			}
+		}
+		if (frags->size != PAGE_SIZE || frags->page_offset)
+			break;
+		ret = vm_insert_page(vma, address + length,
+				     skb_frag_page(frags));
+		if (ret)
+			break;
+		length += PAGE_SIZE;
+		seq += PAGE_SIZE;
+		zc->recv_skip_hint -= PAGE_SIZE;
+		frags++;
+	}
 out:
-	release_sock(sk);
-	kvfree(pages_array);
+	up_read(&current->mm->mmap_sem);
+	if (length) {
+		tp->copied_seq = seq;
+		tcp_rcv_space_adjust(sk);
+
+		/* Clean up data we have read: This will do ACK frames. */
+		tcp_recv_skb(sk, seq, &offset);
+		tcp_cleanup_rbuf(sk, length);
+		ret = 0;
+		if (length == zc->length)
+			zc->recv_skip_hint = 0;
+	} else {
+		if (!zc->recv_skip_hint && sock_flag(sk, SOCK_DONE))
+			ret = -EIO;
+	}
+	zc->length = length;
 	return ret;
 }
-EXPORT_SYMBOL(tcp_mmap);
 
 static void tcp_update_recv_tstamps(struct sk_buff *skb,
 				    struct scm_timestamping *tss)
@@ -3472,6 +3465,23 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
 		}
 		return 0;
 	}
+	case TCP_ZEROCOPY_RECEIVE: {
+		struct tcp_zerocopy_receive zc;
+		int err;
+
+		if (get_user(len, optlen))
+			return -EFAULT;
+		if (len != sizeof(zc))
+			return -EINVAL;
+		if (copy_from_user(&zc, optval, len))
+			return -EFAULT;
+		lock_sock(sk);
+		err = tcp_zerocopy_receive(sk, &zc);
+		release_sock(sk);
+		if (!err && copy_to_user(optval, &zc, len))
+			err = -EFAULT;
+		return err;
+	}
 	default:
 		return -ENOPROTOOPT;
 	}
-- 
2.17.0.484.g0c8726318c-goog


* [PATCH v3 net-next 2/2] selftests: net: tcp_mmap must use TCP_ZEROCOPY_RECEIVE
  2018-04-26 14:50 [PATCH v3 net-next 0/2] tcp: mmap: rework zerocopy receive Eric Dumazet
  2018-04-26 14:50 ` [PATCH v3 net-next 1/2] tcp: add TCP_ZEROCOPY_RECEIVE support for " Eric Dumazet
@ 2018-04-26 14:50 ` Eric Dumazet
  2018-04-26 14:57   ` Soheil Hassas Yeganeh
  1 sibling, 1 reply; 5+ messages in thread
From: Eric Dumazet @ 2018-04-26 14:50 UTC
  To: David S . Miller
  Cc: netdev, Andy Lutomirski, linux-kernel, linux-mm, Ka-Cheong Poon,
	Eric Dumazet, Eric Dumazet, Soheil Hassas Yeganeh

After the prior kernel change, mmap() on a TCP socket only reserves VMA space.

We have to use getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, ...)
to perform the transfer of pages from skbs in the TCP receive queue into that VMA.

struct tcp_zerocopy_receive {
	__u64 address;		/* in: address of mapping */
	__u32 length;		/* in/out: number of bytes to map/mapped */
	__u32 recv_skip_hint;	/* out: amount of bytes to skip */
};

After a successful getsockopt(...TCP_ZEROCOPY_RECEIVE...), @length contains the
number of bytes that were mapped, and @recv_skip_hint contains the number of
bytes that should be read using conventional read()/recv()/recvmsg() system
calls, to skip over a sequence of bytes that cannot be mapped because it is
not properly page aligned.
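
Condensed, the per-chunk logic in the updated tool looks roughly like this
(a sketch only; consume() stands in for whatever processes the mapped bytes,
buffer is the tool's fallback buffer, and error handling is elided — see the
diff below for the real code):

	struct tcp_zerocopy_receive zc;
	socklen_t zc_len = sizeof(zc);

	zc.address = (__u64)addr;	/* VMA reserved earlier via mmap() */
	zc.length = chunk_size;
	zc.recv_skip_hint = 0;
	if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &zc_len) == -1)
		return;
	if (zc.length)			/* zc.length bytes are mapped at addr */
		consume(addr, zc.length);
	if (zc.recv_skip_hint)		/* unaligned tail: fall back to read() */
		read(fd, buffer, zc.recv_skip_hint);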

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
---
 tools/testing/selftests/net/tcp_mmap.c | 64 +++++++++++++++-----------
 1 file changed, 37 insertions(+), 27 deletions(-)

diff --git a/tools/testing/selftests/net/tcp_mmap.c b/tools/testing/selftests/net/tcp_mmap.c
index dea342fe6f4e88b5709d2ac37b2fc9a2a320bf44..77f762780199ff1f69f9f6b3f18e72deddb69f5e 100644
--- a/tools/testing/selftests/net/tcp_mmap.c
+++ b/tools/testing/selftests/net/tcp_mmap.c
@@ -76,9 +76,10 @@
 #include <time.h>
 #include <sys/time.h>
 #include <netinet/in.h>
-#include <netinet/tcp.h>
 #include <arpa/inet.h>
 #include <poll.h>
+#include <linux/tcp.h>
+#include <assert.h>
 
 #ifndef MSG_ZEROCOPY
 #define MSG_ZEROCOPY    0x4000000
@@ -134,11 +135,12 @@ void hash_zone(void *zone, unsigned int length)
 void *child_thread(void *arg)
 {
 	unsigned long total_mmap = 0, total = 0;
+	struct tcp_zerocopy_receive zc;
 	unsigned long delta_usec;
 	int flags = MAP_SHARED;
 	struct timeval t0, t1;
 	char *buffer = NULL;
-	void *oaddr = NULL;
+	void *addr = NULL;
 	double throughput;
 	struct rusage ru;
 	int lu, fd;
@@ -153,41 +155,46 @@ void *child_thread(void *arg)
 		perror("malloc");
 		goto error;
 	}
+	if (zflg) {
+		addr = mmap(NULL, chunk_size, PROT_READ, flags, fd, 0);
+		if (addr == (void *)-1)
+			zflg = 0;
+	}
 	while (1) {
 		struct pollfd pfd = { .fd = fd, .events = POLLIN, };
 		int sub;
 
 		poll(&pfd, 1, 10000);
 		if (zflg) {
-			void *naddr;
+			socklen_t zc_len = sizeof(zc);
+			int res;
 
-			naddr = mmap(oaddr, chunk_size, PROT_READ, flags, fd, 0);
-			if (naddr == (void *)-1) {
-				if (errno == EAGAIN) {
-					/* That is if SO_RCVLOWAT is buggy */
-					usleep(1000);
-					continue;
-				}
-				if (errno == EINVAL) {
-					flags = MAP_SHARED;
-					oaddr = NULL;
-					goto fallback;
-				}
-				if (errno != EIO)
-					perror("mmap()");
+			zc.address = (__u64)addr;
+			zc.length = chunk_size;
+			zc.recv_skip_hint = 0;
+			res = getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE,
+					 &zc, &zc_len);
+			if (res == -1)
 				break;
+
+			if (zc.length) {
+				assert(zc.length <= chunk_size);
+				total_mmap += zc.length;
+				if (xflg)
+					hash_zone(addr, zc.length);
+				total += zc.length;
 			}
-			total_mmap += chunk_size;
-			if (xflg)
-				hash_zone(naddr, chunk_size);
-			total += chunk_size;
-			if (!keepflag) {
-				flags |= MAP_FIXED;
-				oaddr = naddr;
+			if (zc.recv_skip_hint) {
+				assert(zc.recv_skip_hint <= chunk_size);
+				lu = read(fd, buffer, zc.recv_skip_hint);
+				if (lu > 0) {
+					if (xflg)
+						hash_zone(buffer, lu);
+					total += lu;
+				}
 			}
 			continue;
 		}
-fallback:
 		sub = 0;
 		while (sub < chunk_size) {
 			lu = read(fd, buffer + sub, chunk_size - sub);
@@ -228,6 +235,8 @@ void *child_thread(void *arg)
 error:
 	free(buffer);
 	close(fd);
+	if (zflg)
+		munmap(addr, chunk_size);
 	pthread_exit(0);
 }
 
@@ -371,7 +380,8 @@ int main(int argc, char *argv[])
 		setup_sockaddr(cfg_family, host, &listenaddr);
 
 		if (mss &&
-		    setsockopt(fdlisten, SOL_TCP, TCP_MAXSEG, &mss, sizeof(mss)) == -1) {
+		    setsockopt(fdlisten, IPPROTO_TCP, TCP_MAXSEG,
+			       &mss, sizeof(mss)) == -1) {
 			perror("setsockopt TCP_MAXSEG");
 			exit(1);
 		}
@@ -402,7 +412,7 @@ int main(int argc, char *argv[])
 	setup_sockaddr(cfg_family, host, &addr);
 
 	if (mss &&
-	    setsockopt(fd, SOL_TCP, TCP_MAXSEG, &mss, sizeof(mss)) == -1) {
+	    setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss)) == -1) {
 		perror("setsockopt TCP_MAXSEG");
 		exit(1);
 	}
-- 
2.17.0.484.g0c8726318c-goog


* Re: [PATCH v3 net-next 2/2] selftests: net: tcp_mmap must use TCP_ZEROCOPY_RECEIVE
  2018-04-26 14:50 ` [PATCH v3 net-next 2/2] selftests: net: tcp_mmap must use TCP_ZEROCOPY_RECEIVE Eric Dumazet
@ 2018-04-26 14:57   ` Soheil Hassas Yeganeh
  0 siblings, 0 replies; 5+ messages in thread
From: Soheil Hassas Yeganeh @ 2018-04-26 14:57 UTC
  To: Eric Dumazet
  Cc: David S . Miller, netdev, Andy Lutomirski, linux-kernel,
	linux-mm, Ka-Cheong Poon, Eric Dumazet

On Thu, Apr 26, 2018 at 10:50 AM, Eric Dumazet <edumazet@google.com> wrote:
> After the prior kernel change, mmap() on a TCP socket only reserves VMA space.
> [...]
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Soheil Hassas Yeganeh <soheil@google.com>

Acked-by: Soheil Hassas Yeganeh <soheil@google.com>

Thank you, again!


* Re: [PATCH v3 net-next 1/2] tcp: add TCP_ZEROCOPY_RECEIVE support for zerocopy receive
  2018-04-26 14:50 ` [PATCH v3 net-next 1/2] tcp: add TCP_ZEROCOPY_RECEIVE support for " Eric Dumazet
@ 2018-04-26 14:58   ` Soheil Hassas Yeganeh
  0 siblings, 0 replies; 5+ messages in thread
From: Soheil Hassas Yeganeh @ 2018-04-26 14:58 UTC
  To: Eric Dumazet
  Cc: David S . Miller, netdev, Andy Lutomirski, linux-kernel,
	linux-mm, Ka-Cheong Poon, Eric Dumazet

On Thu, Apr 26, 2018 at 10:50 AM, Eric Dumazet <edumazet@google.com> wrote:
> When adding the tcp mmap() implementation, I forgot that the socket lock
> had to be taken before current->mm->mmap_sem. syzbot eventually caught
> the bug.
> [...]
> Suggested-by: Andy Lutomirski <luto@kernel.org>
> Cc: linux-mm@kvack.org
> Cc: Soheil Hassas Yeganeh <soheil@google.com>

Acked-by: Soheil Hassas Yeganeh <soheil@google.com>

getsockopt() is indeed a better choice. Thanks!
