* [Patch bpf-next v3 0/4] sockmap: some performance optimizations
@ 2022-06-02  1:21 Cong Wang
  2022-06-02  1:21 ` [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb() Cong Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Cong Wang @ 2022-06-02  1:21 UTC (permalink / raw)
  To: netdev; +Cc: bpf, Cong Wang

From: Cong Wang <cong.wang@bytedance.com>

This patchset contains two optimizations for sockmap. The first one
eliminates an skb_clone() and the second one eliminates a memset(). With
this patchset, the throughput of UDP transmission via sockmap improves
by 61%.

v3: avoid touching tcp_recv_skb()

v2: clean up coding style for tcp_read_skb()
    get rid of some redundant variables
    add a comment for ->read_skb()

---

Cong Wang (4):
  tcp: introduce tcp_read_skb()
  net: introduce a new proto_ops ->read_skb()
  skmsg: get rid of skb_clone()
  skmsg: get rid of unnecessary memset()

 include/linux/net.h |  4 ++++
 include/net/tcp.h   |  1 +
 include/net/udp.h   |  3 +--
 net/core/skmsg.c    | 48 +++++++++++++++++----------------------------
 net/ipv4/af_inet.c  |  3 ++-
 net/ipv4/tcp.c      | 44 +++++++++++++++++++++++++++++++++++++++++
 net/ipv4/udp.c      | 11 +++++------
 net/ipv6/af_inet6.c |  3 ++-
 net/unix/af_unix.c  | 23 +++++++++-------------
 9 files changed, 86 insertions(+), 54 deletions(-)

-- 
2.34.1



* [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb()
  2022-06-02  1:21 [Patch bpf-next v3 0/4] sockmap: some performance optimizations Cong Wang
@ 2022-06-02  1:21 ` Cong Wang
  2022-06-09 15:08   ` John Fastabend
  2022-06-02  1:21 ` [Patch bpf-next v3 2/4] net: introduce a new proto_ops ->read_skb() Cong Wang
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Cong Wang @ 2022-06-02  1:21 UTC (permalink / raw)
  To: netdev
  Cc: bpf, Cong Wang, Eric Dumazet, John Fastabend, Daniel Borkmann,
	Jakub Sitnicki

From: Cong Wang <cong.wang@bytedance.com>

This patch introduces tcp_read_skb() based on tcp_read_sock(),
as a preparation for the next patch, which actually introduces
a new proto_ops ->read_skb().

TCP is special here because it has tcp_read_sock(), which is
mainly used by splice(). tcp_read_sock() supports partial reads
and arbitrary offsets, neither of which is needed for sockmap.

Cc: Eric Dumazet <edumazet@google.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
---
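For illustration, this is roughly how a caller drives tcp_read_skb()
at this point in the series (a hypothetical sketch, not part of the
patch; both function names below are made up):

/* A recv actor as tcp_read_skb() invokes it: offset is always 0 and
 * len is always skb->len.  The skb has already been unlinked from
 * sk->sk_receive_queue, and tcp_read_skb() frees it itself once the
 * actor reports it as used, so an actor that wants to keep the skb
 * must take its own reference with skb_get().
 */
static int example_recv_actor(read_descriptor_t *desc, struct sk_buff *skb,
			      unsigned int offset, size_t len)
{
	return len;	/* report the whole skb as consumed */
}

static int example_read(struct sock *sk)
{
	read_descriptor_t desc = { .count = 1 };

	return tcp_read_skb(sk, &desc, example_recv_actor);
}
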
 include/net/tcp.h |  2 ++
 net/ipv4/tcp.c    | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 1e99f5c61f84..878544d0f8f9 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -669,6 +669,8 @@ void tcp_get_info(struct sock *, struct tcp_info *);
 /* Read 'sendfile()'-style from a TCP socket */
 int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
 		  sk_read_actor_t recv_actor);
+int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
+		 sk_read_actor_t recv_actor);
 
 void tcp_initialize_rcv_mss(struct sock *sk);
 
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 9984d23a7f3e..a18e9ababf54 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1709,6 +1709,53 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
 }
 EXPORT_SYMBOL(tcp_read_sock);
 
+int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
+		 sk_read_actor_t recv_actor)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+	u32 seq = tp->copied_seq;
+	struct sk_buff *skb;
+	int copied = 0;
+	u32 offset;
+
+	if (sk->sk_state == TCP_LISTEN)
+		return -ENOTCONN;
+
+	while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
+		int used;
+
+		__skb_unlink(skb, &sk->sk_receive_queue);
+		used = recv_actor(desc, skb, 0, skb->len);
+		if (used <= 0) {
+			if (!copied)
+				copied = used;
+			break;
+		}
+		seq += used;
+		copied += used;
+
+		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
+			kfree_skb(skb);
+			++seq;
+			break;
+		}
+		kfree_skb(skb);
+		if (!desc->count)
+			break;
+		WRITE_ONCE(tp->copied_seq, seq);
+	}
+	WRITE_ONCE(tp->copied_seq, seq);
+
+	tcp_rcv_space_adjust(sk);
+
+	/* Clean up data we have read: This will do ACK frames. */
+	if (copied > 0)
+		tcp_cleanup_rbuf(sk, copied);
+
+	return copied;
+}
+EXPORT_SYMBOL(tcp_read_skb);
+
 int tcp_peek_len(struct socket *sock)
 {
 	return tcp_inq(sock->sk);
-- 
2.34.1



* [Patch bpf-next v3 2/4] net: introduce a new proto_ops ->read_skb()
  2022-06-02  1:21 [Patch bpf-next v3 0/4] sockmap: some performance optimizations Cong Wang
  2022-06-02  1:21 ` [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb() Cong Wang
@ 2022-06-02  1:21 ` Cong Wang
  2022-06-02  1:21 ` [Patch bpf-next v3 3/4] skmsg: get rid of skb_clone() Cong Wang
  2022-06-02  1:21 ` [Patch bpf-next v3 4/4] skmsg: get rid of unnecessary memset() Cong Wang
  3 siblings, 0 replies; 10+ messages in thread
From: Cong Wang @ 2022-06-02  1:21 UTC (permalink / raw)
  To: netdev
  Cc: bpf, Cong Wang, Eric Dumazet, John Fastabend, Daniel Borkmann,
	Jakub Sitnicki

From: Cong Wang <cong.wang@bytedance.com>

Currently both splice() and sockmap use ->read_sock() to read skbs
from the receive queue, but sockmap only ever reads one entire skb
at a time, so ->read_sock() is too conservative for it. Introduce a
new proto_ops ->read_skb() which supports this semantic; with it we
can finally pass the ownership of the skb to the recv actors.

For non-TCP protocols, all ->read_sock() users can simply be
converted to ->read_skb().

Cc: Eric Dumazet <edumazet@google.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
---
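To illustrate the new calling convention (a hedged sketch with
made-up names, not part of the patch; the real in-tree user is
sk_psock_verdict_recv(), converted below):

/* A minimal ->read_skb() actor: unlike sk_read_actor_t there is no
 * desc/offset/len, the callee simply gets one whole skb per call
 * and returns how much of it was used.
 */
static int example_skb_actor(struct sock *sk, struct sk_buff *skb)
{
	return skb->len;	/* consume the entire skb */
}

static void example_data_ready(struct sock *sk)
{
	struct socket *sock = sk->sk_socket;

	if (sock && sock->ops && sock->ops->read_skb)
		sock->ops->read_skb(sk, example_skb_actor);
}
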
 include/linux/net.h |  4 ++++
 include/net/tcp.h   |  3 +--
 include/net/udp.h   |  3 +--
 net/core/skmsg.c    | 20 +++++---------------
 net/ipv4/af_inet.c  |  3 ++-
 net/ipv4/tcp.c      |  9 +++------
 net/ipv4/udp.c      | 10 ++++------
 net/ipv6/af_inet6.c |  3 ++-
 net/unix/af_unix.c  | 23 +++++++++--------------
 9 files changed, 31 insertions(+), 47 deletions(-)

diff --git a/include/linux/net.h b/include/linux/net.h
index 12093f4db50c..a03485e8cbb2 100644
--- a/include/linux/net.h
+++ b/include/linux/net.h
@@ -152,6 +152,8 @@ struct module;
 struct sk_buff;
 typedef int (*sk_read_actor_t)(read_descriptor_t *, struct sk_buff *,
 			       unsigned int, size_t);
+typedef int (*skb_read_actor_t)(struct sock *, struct sk_buff *);
+
 
 struct proto_ops {
 	int		family;
@@ -214,6 +216,8 @@ struct proto_ops {
 	 */
 	int		(*read_sock)(struct sock *sk, read_descriptor_t *desc,
 				     sk_read_actor_t recv_actor);
+	/* This is different from read_sock(), it reads an entire skb at a time. */
+	int		(*read_skb)(struct sock *sk, skb_read_actor_t recv_actor);
 	int		(*sendpage_locked)(struct sock *sk, struct page *page,
 					   int offset, size_t size, int flags);
 	int		(*sendmsg_locked)(struct sock *sk, struct msghdr *msg,
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 878544d0f8f9..3aa859c9a0fb 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -669,8 +669,7 @@ void tcp_get_info(struct sock *, struct tcp_info *);
 /* Read 'sendfile()'-style from a TCP socket */
 int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
 		  sk_read_actor_t recv_actor);
-int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
-		 sk_read_actor_t recv_actor);
+int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor);
 
 void tcp_initialize_rcv_mss(struct sock *sk);
 
diff --git a/include/net/udp.h b/include/net/udp.h
index b83a00330566..47a0e3359771 100644
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -305,8 +305,7 @@ struct sock *__udp6_lib_lookup(struct net *net,
 			       struct sk_buff *skb);
 struct sock *udp6_lib_lookup_skb(const struct sk_buff *skb,
 				 __be16 sport, __be16 dport);
-int udp_read_sock(struct sock *sk, read_descriptor_t *desc,
-		  sk_read_actor_t recv_actor);
+int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor);
 
 /* UDP uses skb->dev_scratch to cache as much information as possible and avoid
  * possibly multiple cache miss on dequeue()
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 7e03f96e441b..f7f63b7d990c 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1160,21 +1160,17 @@ static void sk_psock_done_strp(struct sk_psock *psock)
 }
 #endif /* CONFIG_BPF_STREAM_PARSER */
 
-static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
-				 unsigned int offset, size_t orig_len)
+static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 {
-	struct sock *sk = (struct sock *)desc->arg.data;
 	struct sk_psock *psock;
 	struct bpf_prog *prog;
 	int ret = __SK_DROP;
-	int len = orig_len;
+	int len = skb->len;
 
 	/* clone here so sk_eat_skb() in tcp_read_sock does not drop our data */
 	skb = skb_clone(skb, GFP_ATOMIC);
-	if (!skb) {
-		desc->error = -ENOMEM;
+	if (!skb)
 		return 0;
-	}
 
 	rcu_read_lock();
 	psock = sk_psock(sk);
@@ -1204,16 +1200,10 @@ static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
 static void sk_psock_verdict_data_ready(struct sock *sk)
 {
 	struct socket *sock = sk->sk_socket;
-	read_descriptor_t desc;
 
-	if (unlikely(!sock || !sock->ops || !sock->ops->read_sock))
+	if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
 		return;
-
-	desc.arg.data = sk;
-	desc.error = 0;
-	desc.count = 1;
-
-	sock->ops->read_sock(sk, &desc, sk_psock_verdict_recv);
+	sock->ops->read_skb(sk, sk_psock_verdict_recv);
 }
 
 void sk_psock_start_verdict(struct sock *sk, struct sk_psock *psock)
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 93da9f783bec..f615263855d0 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1040,6 +1040,7 @@ const struct proto_ops inet_stream_ops = {
 	.sendpage	   = inet_sendpage,
 	.splice_read	   = tcp_splice_read,
 	.read_sock	   = tcp_read_sock,
+	.read_skb	   = tcp_read_skb,
 	.sendmsg_locked    = tcp_sendmsg_locked,
 	.sendpage_locked   = tcp_sendpage_locked,
 	.peek_len	   = tcp_peek_len,
@@ -1067,7 +1068,7 @@ const struct proto_ops inet_dgram_ops = {
 	.setsockopt	   = sock_common_setsockopt,
 	.getsockopt	   = sock_common_getsockopt,
 	.sendmsg	   = inet_sendmsg,
-	.read_sock	   = udp_read_sock,
+	.read_skb	   = udp_read_skb,
 	.recvmsg	   = inet_recvmsg,
 	.mmap		   = sock_no_mmap,
 	.sendpage	   = inet_sendpage,
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index a18e9ababf54..8b9327a0d0d5 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1709,8 +1709,7 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
 }
 EXPORT_SYMBOL(tcp_read_sock);
 
-int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
-		 sk_read_actor_t recv_actor)
+int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	u32 seq = tp->copied_seq;
@@ -1725,7 +1724,7 @@ int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
 		int used;
 
 		__skb_unlink(skb, &sk->sk_receive_queue);
-		used = recv_actor(desc, skb, 0, skb->len);
+		used = recv_actor(sk, skb);
 		if (used <= 0) {
 			if (!copied)
 				copied = used;
@@ -1740,9 +1739,7 @@ int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
 			break;
 		}
 		kfree_skb(skb);
-		if (!desc->count)
-			break;
-		WRITE_ONCE(tp->copied_seq, seq);
+		break;
 	}
 	WRITE_ONCE(tp->copied_seq, seq);
 
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index aa9f2ec3dc46..0a1e90b80e36 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1795,8 +1795,7 @@ struct sk_buff *__skb_recv_udp(struct sock *sk, unsigned int flags,
 }
 EXPORT_SYMBOL(__skb_recv_udp);
 
-int udp_read_sock(struct sock *sk, read_descriptor_t *desc,
-		  sk_read_actor_t recv_actor)
+int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 {
 	int copied = 0;
 
@@ -1818,7 +1817,7 @@ int udp_read_sock(struct sock *sk, read_descriptor_t *desc,
 			continue;
 		}
 
-		used = recv_actor(desc, skb, 0, skb->len);
+		used = recv_actor(sk, skb);
 		if (used <= 0) {
 			if (!copied)
 				copied = used;
@@ -1829,13 +1828,12 @@ int udp_read_sock(struct sock *sk, read_descriptor_t *desc,
 		}
 
 		kfree_skb(skb);
-		if (!desc->count)
-			break;
+		break;
 	}
 
 	return copied;
 }
-EXPORT_SYMBOL(udp_read_sock);
+EXPORT_SYMBOL(udp_read_skb);
 
 /*
  * 	This should be easy, if there is something there we
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index 70564ddccc46..1aea5ef9bdea 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -701,6 +701,7 @@ const struct proto_ops inet6_stream_ops = {
 	.sendpage_locked   = tcp_sendpage_locked,
 	.splice_read	   = tcp_splice_read,
 	.read_sock	   = tcp_read_sock,
+	.read_skb	   = tcp_read_skb,
 	.peek_len	   = tcp_peek_len,
 #ifdef CONFIG_COMPAT
 	.compat_ioctl	   = inet6_compat_ioctl,
@@ -726,7 +727,7 @@ const struct proto_ops inet6_dgram_ops = {
 	.getsockopt	   = sock_common_getsockopt,	/* ok		*/
 	.sendmsg	   = inet6_sendmsg,		/* retpoline's sake */
 	.recvmsg	   = inet6_recvmsg,		/* retpoline's sake */
-	.read_sock	   = udp_read_sock,
+	.read_skb	   = udp_read_skb,
 	.mmap		   = sock_no_mmap,
 	.sendpage	   = sock_no_sendpage,
 	.set_peek_off	   = sk_set_peek_off,
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 654dcef7cfb3..3a96008ec331 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -741,10 +741,8 @@ static ssize_t unix_stream_splice_read(struct socket *,  loff_t *ppos,
 				       unsigned int flags);
 static int unix_dgram_sendmsg(struct socket *, struct msghdr *, size_t);
 static int unix_dgram_recvmsg(struct socket *, struct msghdr *, size_t, int);
-static int unix_read_sock(struct sock *sk, read_descriptor_t *desc,
-			  sk_read_actor_t recv_actor);
-static int unix_stream_read_sock(struct sock *sk, read_descriptor_t *desc,
-				 sk_read_actor_t recv_actor);
+static int unix_read_skb(struct sock *sk, skb_read_actor_t recv_actor);
+static int unix_stream_read_skb(struct sock *sk, skb_read_actor_t recv_actor);
 static int unix_dgram_connect(struct socket *, struct sockaddr *,
 			      int, int);
 static int unix_seqpacket_sendmsg(struct socket *, struct msghdr *, size_t);
@@ -798,7 +796,7 @@ static const struct proto_ops unix_stream_ops = {
 	.shutdown =	unix_shutdown,
 	.sendmsg =	unix_stream_sendmsg,
 	.recvmsg =	unix_stream_recvmsg,
-	.read_sock =	unix_stream_read_sock,
+	.read_skb =	unix_stream_read_skb,
 	.mmap =		sock_no_mmap,
 	.sendpage =	unix_stream_sendpage,
 	.splice_read =	unix_stream_splice_read,
@@ -823,7 +821,7 @@ static const struct proto_ops unix_dgram_ops = {
 	.listen =	sock_no_listen,
 	.shutdown =	unix_shutdown,
 	.sendmsg =	unix_dgram_sendmsg,
-	.read_sock =	unix_read_sock,
+	.read_skb =	unix_read_skb,
 	.recvmsg =	unix_dgram_recvmsg,
 	.mmap =		sock_no_mmap,
 	.sendpage =	sock_no_sendpage,
@@ -2487,8 +2485,7 @@ static int unix_dgram_recvmsg(struct socket *sock, struct msghdr *msg, size_t si
 	return __unix_dgram_recvmsg(sk, msg, size, flags);
 }
 
-static int unix_read_sock(struct sock *sk, read_descriptor_t *desc,
-			  sk_read_actor_t recv_actor)
+static int unix_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 {
 	int copied = 0;
 
@@ -2503,7 +2500,7 @@ static int unix_read_sock(struct sock *sk, read_descriptor_t *desc,
 		if (!skb)
 			return err;
 
-		used = recv_actor(desc, skb, 0, skb->len);
+		used = recv_actor(sk, skb);
 		if (used <= 0) {
 			if (!copied)
 				copied = used;
@@ -2514,8 +2511,7 @@ static int unix_read_sock(struct sock *sk, read_descriptor_t *desc,
 		}
 
 		kfree_skb(skb);
-		if (!desc->count)
-			break;
+		break;
 	}
 
 	return copied;
@@ -2650,13 +2646,12 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
 }
 #endif
 
-static int unix_stream_read_sock(struct sock *sk, read_descriptor_t *desc,
-				 sk_read_actor_t recv_actor)
+static int unix_stream_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 {
 	if (unlikely(sk->sk_state != TCP_ESTABLISHED))
 		return -ENOTCONN;
 
-	return unix_read_sock(sk, desc, recv_actor);
+	return unix_read_skb(sk, recv_actor);
 }
 
 static int unix_stream_read_generic(struct unix_stream_read_state *state,
-- 
2.34.1



* [Patch bpf-next v3 3/4] skmsg: get rid of skb_clone()
  2022-06-02  1:21 [Patch bpf-next v3 0/4] sockmap: some performance optimizations Cong Wang
  2022-06-02  1:21 ` [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb() Cong Wang
  2022-06-02  1:21 ` [Patch bpf-next v3 2/4] net: introduce a new proto_ops ->read_skb() Cong Wang
@ 2022-06-02  1:21 ` Cong Wang
  2022-06-02  1:21 ` [Patch bpf-next v3 4/4] skmsg: get rid of unnecessary memset() Cong Wang
  3 siblings, 0 replies; 10+ messages in thread
From: Cong Wang @ 2022-06-02  1:21 UTC (permalink / raw)
  To: netdev
  Cc: bpf, Cong Wang, Eric Dumazet, John Fastabend, Daniel Borkmann,
	Jakub Sitnicki

From: Cong Wang <cong.wang@bytedance.com>

With ->read_skb() we now have an entire skb dequeued from the
receive queue, so we just need to grab an additional refcnt before
passing its ownership to the recv actors.

After that we should not touch the skb any more, particularly
skb->sk. Fortunately, skb->sk is already set for most of the
protocols; UDP is the exception, since its skb->sk has been stolen,
so we have to fix it up for the UDP case.

Cc: Eric Dumazet <edumazet@google.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
---
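The resulting reference flow, sketched for clarity (illustrative
only, not code from this patch):

/* Hypothetical trace of one skb through the TCP read loop. */
skb = tcp_recv_skb(sk, seq, &offset);	/* refcnt == 1, was the queue's */
__skb_unlink(skb, &sk->sk_receive_queue);

skb_get(skb);				/* in the recv actor: refcnt == 2 */
/* ... the verdict path later releases the actor's reference (or
 * keeps the skb queued for ingress), without touching skb->sk ...
 */

kfree_skb(skb);				/* back in the read loop: drop
					 * the reader's reference */
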
 net/core/skmsg.c | 7 +------
 net/ipv4/udp.c   | 1 +
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index f7f63b7d990c..8b248d289c11 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1167,10 +1167,7 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 	int ret = __SK_DROP;
 	int len = skb->len;
 
-	/* clone here so sk_eat_skb() in tcp_read_sock does not drop our data */
-	skb = skb_clone(skb, GFP_ATOMIC);
-	if (!skb)
-		return 0;
+	skb_get(skb);
 
 	rcu_read_lock();
 	psock = sk_psock(sk);
@@ -1183,12 +1180,10 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 	if (!prog)
 		prog = READ_ONCE(psock->progs.skb_verdict);
 	if (likely(prog)) {
-		skb->sk = sk;
 		skb_dst_drop(skb);
 		skb_bpf_redirect_clear(skb);
 		ret = bpf_prog_run_pin_on_cpu(prog, skb);
 		ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb));
-		skb->sk = NULL;
 	}
 	if (sk_psock_verdict_apply(psock, skb, ret) < 0)
 		len = 0;
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 0a1e90b80e36..b09936ccf709 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1817,6 +1817,7 @@ int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 			continue;
 		}
 
+		WARN_ON(!skb_set_owner_sk_safe(skb, sk));
 		used = recv_actor(sk, skb);
 		if (used <= 0) {
 			if (!copied)
-- 
2.34.1



* [Patch bpf-next v3 4/4] skmsg: get rid of unnecessary memset()
  2022-06-02  1:21 [Patch bpf-next v3 0/4] sockmap: some performance optimizations Cong Wang
                   ` (2 preceding siblings ...)
  2022-06-02  1:21 ` [Patch bpf-next v3 3/4] skmsg: get rid of skb_clone() Cong Wang
@ 2022-06-02  1:21 ` Cong Wang
  3 siblings, 0 replies; 10+ messages in thread
From: Cong Wang @ 2022-06-02  1:21 UTC (permalink / raw)
  To: netdev; +Cc: bpf, Cong Wang, John Fastabend, Daniel Borkmann, Jakub Sitnicki

From: Cong Wang <cong.wang@bytedance.com>

We always allocate skmsg with kzalloc(), so there is no need to
call memset(0) on it; the only thing we need from sk_msg_init() is
sg_init_marker(). So introduce a new helper which is just
kzalloc() + sg_init_marker(); this saves an unnecessary memset(0)
for skmsg on the fast path.

Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
---
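The saving comes from sg_init_marker() touching only the end-marker
entry instead of zeroing the whole structure again; roughly (a
sketch of the two initialization paths, not the patch itself):

/* Before: the msg is zeroed twice. */
msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_KERNEL);
sk_msg_init(msg);	/* memset(msg, 0, ...) again + sg_init_marker() */

/* After: zeroed once by kzalloc(); sg_init_marker() merely sets the
 * end-marker bit on the last scatterlist entry of the zeroed array.
 */
msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_KERNEL);
sg_init_marker(msg->sg.data, NR_MSG_FRAG_IDS);
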
 net/core/skmsg.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 8b248d289c11..4b297d67edb7 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -497,23 +497,27 @@ bool sk_msg_is_readable(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(sk_msg_is_readable);
 
-static struct sk_msg *sk_psock_create_ingress_msg(struct sock *sk,
-						  struct sk_buff *skb)
+static struct sk_msg *alloc_sk_msg(void)
 {
 	struct sk_msg *msg;
 
-	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf)
+	msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_KERNEL);
+	if (unlikely(!msg))
 		return NULL;
+	sg_init_marker(msg->sg.data, NR_MSG_FRAG_IDS);
+	return msg;
+}
 
-	if (!sk_rmem_schedule(sk, skb, skb->truesize))
+static struct sk_msg *sk_psock_create_ingress_msg(struct sock *sk,
+						  struct sk_buff *skb)
+{
+	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf)
 		return NULL;
 
-	msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_KERNEL);
-	if (unlikely(!msg))
+	if (!sk_rmem_schedule(sk, skb, skb->truesize))
 		return NULL;
 
-	sk_msg_init(msg);
-	return msg;
+	return alloc_sk_msg();
 }
 
 static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
@@ -590,13 +594,12 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
 static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
 				     u32 off, u32 len)
 {
-	struct sk_msg *msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_ATOMIC);
+	struct sk_msg *msg = alloc_sk_msg();
 	struct sock *sk = psock->sk;
 	int err;
 
 	if (unlikely(!msg))
 		return -EAGAIN;
-	sk_msg_init(msg);
 	skb_set_owner_r(skb, sk);
 	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
 	if (err < 0)
-- 
2.34.1



* RE: [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb()
  2022-06-02  1:21 ` [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb() Cong Wang
@ 2022-06-09 15:08   ` John Fastabend
  2022-06-09 18:50     ` Cong Wang
  0 siblings, 1 reply; 10+ messages in thread
From: John Fastabend @ 2022-06-09 15:08 UTC (permalink / raw)
  To: Cong Wang, netdev
  Cc: bpf, Cong Wang, Eric Dumazet, John Fastabend, Daniel Borkmann,
	Jakub Sitnicki

Cong Wang wrote:
> From: Cong Wang <cong.wang@bytedance.com>
> 
> This patch introduces tcp_read_skb() based on tcp_read_sock(),
> as a preparation for the next patch, which actually introduces
> a new proto_ops ->read_skb().
> 
> TCP is special here because it has tcp_read_sock(), which is
> mainly used by splice(). tcp_read_sock() supports partial reads
> and arbitrary offsets, neither of which is needed for sockmap.
> 
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Jakub Sitnicki <jakub@cloudflare.com>
> Signed-off-by: Cong Wang <cong.wang@bytedance.com>
> ---
>  include/net/tcp.h |  2 ++
>  net/ipv4/tcp.c    | 47 +++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 49 insertions(+)
> 
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index 1e99f5c61f84..878544d0f8f9 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -669,6 +669,8 @@ void tcp_get_info(struct sock *, struct tcp_info *);
>  /* Read 'sendfile()'-style from a TCP socket */
>  int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
>  		  sk_read_actor_t recv_actor);
> +int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
> +		 sk_read_actor_t recv_actor);
>  
>  void tcp_initialize_rcv_mss(struct sock *sk);
>  
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 9984d23a7f3e..a18e9ababf54 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -1709,6 +1709,53 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
>  }
>  EXPORT_SYMBOL(tcp_read_sock);
>  
> +int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
> +		 sk_read_actor_t recv_actor)
> +{
> +	struct tcp_sock *tp = tcp_sk(sk);
> +	u32 seq = tp->copied_seq;
> +	struct sk_buff *skb;
> +	int copied = 0;
> +	u32 offset;
> +
> +	if (sk->sk_state == TCP_LISTEN)
> +		return -ENOTCONN;
> +
> +	while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
> +		int used;
> +
> +		__skb_unlink(skb, &sk->sk_receive_queue);
> +		used = recv_actor(desc, skb, 0, skb->len);
> +		if (used <= 0) {
> +			if (!copied)
> +				copied = used;
> +			break;
> +		}
> +		seq += used;
> +		copied += used;
> +
> +		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
> +			kfree_skb(skb);

Hi Cong, can you elaborate here on your comment from v2?

"Hm, it is tricky here, we use the skb refcount after this patchset, so
it could be a real drop from another kfree_skb() in net/core/skmsg.c
which initiates the drop."

The tcp_read_sock() hook is using tcp_eat_recv_skb(). Are we going
to kick the tracing infra even in the good cases with kfree_skb()?
In sk_psock_verdict_recv() we do an skb_clone().

.John


* Re: [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb()
  2022-06-09 15:08   ` John Fastabend
@ 2022-06-09 18:50     ` Cong Wang
  2022-06-09 19:12       ` John Fastabend
  0 siblings, 1 reply; 10+ messages in thread
From: Cong Wang @ 2022-06-09 18:50 UTC (permalink / raw)
  To: John Fastabend
  Cc: Linux Kernel Network Developers, bpf, Cong Wang, Eric Dumazet,
	Daniel Borkmann, Jakub Sitnicki

On Thu, Jun 9, 2022 at 8:08 AM John Fastabend <john.fastabend@gmail.com> wrote:
>
> Cong Wang wrote:
> > From: Cong Wang <cong.wang@bytedance.com>
> >
> > This patch introduces tcp_read_skb() based on tcp_read_sock(),
> > as a preparation for the next patch, which actually introduces
> > a new proto_ops ->read_skb().
> >
> > TCP is special here because it has tcp_read_sock(), which is
> > mainly used by splice(). tcp_read_sock() supports partial reads
> > and arbitrary offsets, neither of which is needed for sockmap.
> >
> > Cc: Eric Dumazet <edumazet@google.com>
> > Cc: John Fastabend <john.fastabend@gmail.com>
> > Cc: Daniel Borkmann <daniel@iogearbox.net>
> > Cc: Jakub Sitnicki <jakub@cloudflare.com>
> > Signed-off-by: Cong Wang <cong.wang@bytedance.com>
> > ---
> >  include/net/tcp.h |  2 ++
> >  net/ipv4/tcp.c    | 47 +++++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 49 insertions(+)
> >
> > diff --git a/include/net/tcp.h b/include/net/tcp.h
> > index 1e99f5c61f84..878544d0f8f9 100644
> > --- a/include/net/tcp.h
> > +++ b/include/net/tcp.h
> > @@ -669,6 +669,8 @@ void tcp_get_info(struct sock *, struct tcp_info *);
> >  /* Read 'sendfile()'-style from a TCP socket */
> >  int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
> >                 sk_read_actor_t recv_actor);
> > +int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
> > +              sk_read_actor_t recv_actor);
> >
> >  void tcp_initialize_rcv_mss(struct sock *sk);
> >
> > diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> > index 9984d23a7f3e..a18e9ababf54 100644
> > --- a/net/ipv4/tcp.c
> > +++ b/net/ipv4/tcp.c
> > @@ -1709,6 +1709,53 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
> >  }
> >  EXPORT_SYMBOL(tcp_read_sock);
> >
> > +int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
> > +              sk_read_actor_t recv_actor)
> > +{
> > +     struct tcp_sock *tp = tcp_sk(sk);
> > +     u32 seq = tp->copied_seq;
> > +     struct sk_buff *skb;
> > +     int copied = 0;
> > +     u32 offset;
> > +
> > +     if (sk->sk_state == TCP_LISTEN)
> > +             return -ENOTCONN;
> > +
> > +     while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
> > +             int used;
> > +
> > +             __skb_unlink(skb, &sk->sk_receive_queue);
> > +             used = recv_actor(desc, skb, 0, skb->len);
> > +             if (used <= 0) {
> > +                     if (!copied)
> > +                             copied = used;
> > +                     break;
> > +             }
> > +             seq += used;
> > +             copied += used;
> > +
> > +             if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
> > +                     kfree_skb(skb);
>
> Hi Cong, can you elaborate here on your comment from v2?
>
> "Hm, it is tricky here, we use the skb refcount after this patchset, so
> it could be a real drop from another kfree_skb() in net/core/skmsg.c
> which initiates the drop."

Sure.

This is the source code of consume_skb():

 911 void consume_skb(struct sk_buff *skb)
 912 {
 913         if (!skb_unref(skb))
 914                 return;
 915
 916         trace_consume_skb(skb);
 917         __kfree_skb(skb);
 918 }

and this is kfree_skb() (or kfree_skb_reason()):

 770 void kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason)
 771 {
 772         if (!skb_unref(skb))
 773                 return;
 774
 775         DEBUG_NET_WARN_ON_ONCE(reason <= 0 || reason >= SKB_DROP_REASON_MAX);
 776
 777         trace_kfree_skb(skb, __builtin_return_address(0), reason);
 778         __kfree_skb(skb);
 779 }

So, both drop the refcnt before tracing, very clearly: the trace
only fires for whoever releases the last reference.

Now, let's do a simple case:

tcp_read_skb():
 -> tcp_recv_skb() // Let's assume skb refcnt == 1 here
  -> recv_actor()
   -> skb_get() // refcnt == 2
   -> kfree_skb() // Let's assume users drop it intentionally
 -> kfree_skb() // refcnt == 0 here; if we had consume_skb() it
                // would not be counted as a drop

Of course you can give another example where consume_skb() is
correct, but the point here is that this is very tricky with
refcounting; I doubt we can do much here, except maybe moving the
trace before the refcnt drop.
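
To make the attribution concrete, here is the same sequence as a
commented sketch (illustrative only, not code from this series):

skb = tcp_recv_skb(sk, seq, &offset);	/* refcnt == 1 */

skb_get(skb);	/* recv actor takes its own reference: refcnt == 2 */
kfree_skb(skb);	/* actor drops it intentionally: 2 -> 1, refcnt is
		 * not zero yet, so no tracepoint fires here */
kfree_skb(skb);	/* read loop releases its reference: 1 -> 0, THIS
		 * call fires trace_kfree_skb(), so the "drop" gets
		 * attributed to the normal read path */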

>
> The tcp_read_sock() hook is using tcp_eat_recv_skb(). Are we going
> to kick the tracing infra even in the good cases with kfree_skb()?
> In sk_psock_verdict_recv() we do an skb_clone().

I don't get your point here. Are you suggesting we should sacrifice
performance just to make the drop tracing more accurate?

Thanks.


* Re: [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb()
  2022-06-09 18:50     ` Cong Wang
@ 2022-06-09 19:12       ` John Fastabend
  2022-06-14 17:19         ` Cong Wang
  0 siblings, 1 reply; 10+ messages in thread
From: John Fastabend @ 2022-06-09 19:12 UTC (permalink / raw)
  To: Cong Wang, John Fastabend
  Cc: Linux Kernel Network Developers, bpf, Cong Wang, Eric Dumazet,
	Daniel Borkmann, Jakub Sitnicki

Cong Wang wrote:
> On Thu, Jun 9, 2022 at 8:08 AM John Fastabend <john.fastabend@gmail.com> wrote:
> >
> > Cong Wang wrote:
> > > From: Cong Wang <cong.wang@bytedance.com>
> > >
> > > This patch introduces tcp_read_skb() based on tcp_read_sock(),
> > > as a preparation for the next patch, which actually introduces
> > > a new proto_ops ->read_skb().
> > >
> > > TCP is special here because it has tcp_read_sock(), which is
> > > mainly used by splice(). tcp_read_sock() supports partial reads
> > > and arbitrary offsets, neither of which is needed for sockmap.
> > >
> > > Cc: Eric Dumazet <edumazet@google.com>
> > > Cc: John Fastabend <john.fastabend@gmail.com>
> > > Cc: Daniel Borkmann <daniel@iogearbox.net>
> > > Cc: Jakub Sitnicki <jakub@cloudflare.com>
> > > Signed-off-by: Cong Wang <cong.wang@bytedance.com>
> > > ---
> > >  include/net/tcp.h |  2 ++
> > >  net/ipv4/tcp.c    | 47 +++++++++++++++++++++++++++++++++++++++++++++++
> > >  2 files changed, 49 insertions(+)
> > >
> > > diff --git a/include/net/tcp.h b/include/net/tcp.h
> > > index 1e99f5c61f84..878544d0f8f9 100644
> > > --- a/include/net/tcp.h
> > > +++ b/include/net/tcp.h
> > > @@ -669,6 +669,8 @@ void tcp_get_info(struct sock *, struct tcp_info *);
> > >  /* Read 'sendfile()'-style from a TCP socket */
> > >  int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
> > >                 sk_read_actor_t recv_actor);
> > > +int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
> > > +              sk_read_actor_t recv_actor);
> > >
> > >  void tcp_initialize_rcv_mss(struct sock *sk);
> > >
> > > diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> > > index 9984d23a7f3e..a18e9ababf54 100644
> > > --- a/net/ipv4/tcp.c
> > > +++ b/net/ipv4/tcp.c
> > > @@ -1709,6 +1709,53 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
> > >  }
> > >  EXPORT_SYMBOL(tcp_read_sock);
> > >
> > > +int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
> > > +              sk_read_actor_t recv_actor)
> > > +{
> > > +     struct tcp_sock *tp = tcp_sk(sk);
> > > +     u32 seq = tp->copied_seq;
> > > +     struct sk_buff *skb;
> > > +     int copied = 0;
> > > +     u32 offset;
> > > +
> > > +     if (sk->sk_state == TCP_LISTEN)
> > > +             return -ENOTCONN;
> > > +
> > > +     while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
> > > +             int used;
> > > +
> > > +             __skb_unlink(skb, &sk->sk_receive_queue);
> > > +             used = recv_actor(desc, skb, 0, skb->len);
> > > +             if (used <= 0) {
> > > +                     if (!copied)
> > > +                             copied = used;
> > > +                     break;
> > > +             }
> > > +             seq += used;
> > > +             copied += used;
> > > +
> > > +             if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
> > > +                     kfree_skb(skb);
> >
> > Hi Cong, can you elaborate here on your comment from v2?
> >
> > "Hm, it is tricky here, we use the skb refcount after this patchset, so
> > it could be a real drop from another kfree_skb() in net/core/skmsg.c
> > which initiates the drop."
> 
> Sure.
> 
> This is the source code of consume_skb():
> 
>  911 void consume_skb(struct sk_buff *skb)
>  912 {
>  913         if (!skb_unref(skb))
>  914                 return;
>  915
>  916         trace_consume_skb(skb);
>  917         __kfree_skb(skb);
>  918 }
> 
> and this is kfree_skb() (or kfree_skb_reason()):
> 
>  770 void kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason)
>  771 {
>  772         if (!skb_unref(skb))
>  773                 return;
>  774
>  775         DEBUG_NET_WARN_ON_ONCE(reason <= 0 || reason >= SKB_DROP_REASON_MAX);
>  776
>  777         trace_kfree_skb(skb, __builtin_return_address(0), reason);
>  778         __kfree_skb(skb);
>  779 }
> 
> So, both drop the refcnt before tracing, very clearly: the trace
> only fires for whoever releases the last reference.
> 
> Now, let's do a simple case:
> 
> tcp_read_skb():
>  -> tcp_recv_skb() // Let's assume skb refcnt == 1 here
>   -> recv_actor()
>    -> skb_get() // refcnt == 2
>    -> kfree_skb() // Let's assume users drop it intentionally
>  -> kfree_skb() // refcnt == 0 here; if we had consume_skb() it
>                 // would not be counted as a drop

OK, great, thanks for that; it matches what I was thinking as well.

> 
> Of course you can give another example where consume_skb() is
> correct, but the point here is that this is very tricky with
> refcounting; I doubt we can do much here, except maybe moving the
> trace before the refcnt drop.

Consider the other case, where we do kfree_skb() when consume_skb()
is correct. We have logic in the Cilium tracing tools (Tetragon) to
trace kfree_skb()s and count them, so in the good case here
we end up tripping that logic even though the free is expected.

The question is which is better: a noisy kfree_skb() even when
expected, or a missing kfree_skb() on the real drops. I'm leaning
toward consume_skb() being the safer choice over a noisy kfree_skb().

> 
> >
> > The tcp_read_sock() hook is using tcp_eat_recv_skb(). Are we going
> > to kick the tracing infra even in the good cases with kfree_skb()?
> > In sk_psock_verdict_recv() we do an skb_clone().
> 
> I don't get your point here. Are you suggesting we should sacrifice
> performance just to make the drop tracing more accurate?

No, let's not sacrifice the performance. I'm suggesting I would
rather go with consume_skb() and miss some kfree_skb()s than
the other way around and have extra kfree_skb()s that will
trip monitoring. Does the question make sense? I guess we
have to pick one.
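
Concretely, what I'm suggesting would look roughly like this in
tcp_read_skb() (a sketch of the idea, not a patch):

/* Release skbs the actor fully consumed with consume_skb(), so that
 * trace_kfree_skb() only fires for data that was actually refused.
 */
if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
	consume_skb(skb);	/* fully read, incl. FIN: not a drop */
	++seq;
	break;
}
consume_skb(skb);		/* fully read: not a drop either */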

> 
> Thanks.




* Re: [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb()
  2022-06-09 19:12       ` John Fastabend
@ 2022-06-14 17:19         ` Cong Wang
  2022-06-14 19:55           ` John Fastabend
  0 siblings, 1 reply; 10+ messages in thread
From: Cong Wang @ 2022-06-14 17:19 UTC (permalink / raw)
  To: John Fastabend
  Cc: Linux Kernel Network Developers, bpf, Cong Wang, Eric Dumazet,
	Daniel Borkmann, Jakub Sitnicki

On Thu, Jun 9, 2022 at 12:12 PM John Fastabend <john.fastabend@gmail.com> wrote:
> Consider the other case, where we do kfree_skb() when consume_skb()
> is correct. We have logic in the Cilium tracing tools (Tetragon) to
> trace kfree_skb()s and count them, so in the good case here
> we end up tripping that logic even though the free is expected.
>
> The question is which is better: a noisy kfree_skb() even when
> expected, or a missing kfree_skb() on the real drops. I'm leaning
> toward consume_skb() being the safer choice over a noisy kfree_skb().

Oh, sure. As long as we all know neither of them is accurate,
I am 100% fine with changing it to consume_skb() to reduce the noise
for you.

Meanwhile, let me think about how to make it accurate, if possible at
all. But clearly this deserves a separate patch.

Thanks.


* Re: [Patch bpf-next v3 1/4] tcp: introduce tcp_read_skb()
  2022-06-14 17:19         ` Cong Wang
@ 2022-06-14 19:55           ` John Fastabend
  0 siblings, 0 replies; 10+ messages in thread
From: John Fastabend @ 2022-06-14 19:55 UTC (permalink / raw)
  To: Cong Wang, John Fastabend
  Cc: Linux Kernel Network Developers, bpf, Cong Wang, Eric Dumazet,
	Daniel Borkmann, Jakub Sitnicki

Cong Wang wrote:
> On Thu, Jun 9, 2022 at 12:12 PM John Fastabend <john.fastabend@gmail.com> wrote:
> > Consider the other case, where we do kfree_skb() when consume_skb()
> > is correct. We have logic in the Cilium tracing tools (Tetragon) to
> > trace kfree_skb()s and count them, so in the good case here
> > we end up tripping that logic even though the free is expected.
> >
> > The question is which is better: a noisy kfree_skb() even when
> > expected, or a missing kfree_skb() on the real drops. I'm leaning
> > toward consume_skb() being the safer choice over a noisy kfree_skb().
> 
> Oh, sure. As long as we all know neither of them is accurate,
> I am 100% fine with changing it to consume_skb() to reduce the noise
> for you.

Thanks that would be great.

> 
> Meanwhile, let me think about how to make it accurate, if possible at
> all. But clearly this deserves a separate patch.

Yep, that should be OK. We set the error code in desc->error in the
verdict recv handler; maybe we can track it through that.

> 
> Thanks.



