* [PATCH bpf-next 0/2] bpf: misc performance improvements for cgroup hooks
@ 2020-12-17 17:23 Stanislav Fomichev
  2020-12-17 17:23 ` [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt Stanislav Fomichev
  2020-12-17 17:23 ` [PATCH bpf-next 2/2] bpf: split cgroup_bpf_enabled per attach type Stanislav Fomichev
  0 siblings, 2 replies; 16+ messages in thread
From: Stanislav Fomichev @ 2020-12-17 17:23 UTC (permalink / raw)
  To: netdev, bpf; +Cc: ast, daniel, Stanislav Fomichev

The first patch tries to avoid the kzalloc/kfree in getsockopt for the
common cases.

The second patch switches cgroup_bpf_enabled to be per attach type so
that the overhead is added only for the cgroup attach types actually
used on the system.

No visible user-side changes.

Stanislav Fomichev (2):
  bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  bpf: split cgroup_bpf_enabled per attach type

 include/linux/bpf-cgroup.h | 36 +++++++++++++------------
 include/linux/filter.h     |  3 +++
 kernel/bpf/cgroup.c        | 55 +++++++++++++++++++++++++++++++-------
 net/ipv4/af_inet.c         |  9 ++++---
 net/ipv4/udp.c             |  7 +++--
 net/ipv6/af_inet6.c        |  9 ++++---
 net/ipv6/udp.c             |  7 +++--
 7 files changed, 83 insertions(+), 43 deletions(-)

-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-17 17:23 [PATCH bpf-next 0/2] bpf: misc performance improvements for cgroup hooks Stanislav Fomichev
@ 2020-12-17 17:23 ` Stanislav Fomichev
  2020-12-21 22:22   ` Song Liu
                     ` (2 more replies)
  2020-12-17 17:23 ` [PATCH bpf-next 2/2] bpf: split cgroup_bpf_enabled per attach type Stanislav Fomichev
  1 sibling, 3 replies; 16+ messages in thread
From: Stanislav Fomichev @ 2020-12-17 17:23 UTC (permalink / raw)
  To: netdev, bpf; +Cc: ast, daniel, Stanislav Fomichev

When we attach a bpf program to cgroup/getsockopt, every getsockopt()
syscall starts incurring kzalloc/kfree cost. While in general that's
not an issue, sometimes it is, as in the case of TCP_ZEROCOPY_RECEIVE:
it (ab)uses the getsockopt system call to implement a fastpath for
incoming TCP, and we don't want extra allocations there.
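
For context, a rough userspace sketch of that fastpath (illustrative
only, not part of this patch; the helper name is made up and the
mmap() setup and error handling are elided):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/tcp.h>	/* struct tcp_zerocopy_receive */

/* 'addr'/'len' describe a region previously mmap()ed on the socket. */
static long zerocopy_receive(int fd, void *addr, unsigned int len)
{
	struct tcp_zerocopy_receive zc;
	socklen_t zc_len = sizeof(zc);

	memset(&zc, 0, sizeof(zc));
	zc.address = (unsigned long)addr;
	zc.length = len;

	/* Each of these calls goes through the cgroup/getsockopt hook;
	 * without the on-stack buffer that means a kzalloc/kfree per call.
	 */
	if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &zc_len))
		return -1;

	return zc.length;	/* bytes now readable at 'addr' */
}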

Let's add a small buffer on the stack and use it for the small
(majority of) {s,g}etsockopt values. I've started with 128 bytes to
cover the options we care about (TCP_ZEROCOPY_RECEIVE, which is
currently 32 bytes, with a planned extension to 64, plus some headroom
for the future).

It seems natural to do the same for setsockopt, but it's a bit more
involved when the BPF program modifies the data (where we have to
kmalloc). The assumption is that for the majority of setsockopt calls
(which either set pure BPF options or apply policy) this will bring
some benefit as well.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
 include/linux/filter.h |  3 +++
 kernel/bpf/cgroup.c    | 41 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 29c27656165b..362eb0d7af5d 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1281,6 +1281,8 @@ struct bpf_sysctl_kern {
 	u64 tmp_reg;
 };
 
+#define BPF_SOCKOPT_KERN_BUF_SIZE	128
+
 struct bpf_sockopt_kern {
 	struct sock	*sk;
 	u8		*optval;
@@ -1289,6 +1291,7 @@ struct bpf_sockopt_kern {
 	s32		optname;
 	s32		optlen;
 	s32		retval;
+	u8		buf[BPF_SOCKOPT_KERN_BUF_SIZE];
 };
 
 int copy_bpf_fprog_from_user(struct sock_fprog *dst, sockptr_t src, int len);
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 6ec088a96302..0cb5d4376844 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -1310,6 +1310,15 @@ static int sockopt_alloc_buf(struct bpf_sockopt_kern *ctx, int max_optlen)
 		max_optlen = PAGE_SIZE;
 	}
 
+	if (max_optlen <= sizeof(ctx->buf)) {
+		/* When the optval fits into BPF_SOCKOPT_KERN_BUF_SIZE
+		 * bytes avoid the cost of kzalloc.
+		 */
+		ctx->optval = ctx->buf;
+		ctx->optval_end = ctx->optval + max_optlen;
+		return max_optlen;
+	}
+
 	ctx->optval = kzalloc(max_optlen, GFP_USER);
 	if (!ctx->optval)
 		return -ENOMEM;
@@ -1321,9 +1330,31 @@ static int sockopt_alloc_buf(struct bpf_sockopt_kern *ctx, int max_optlen)
 
 static void sockopt_free_buf(struct bpf_sockopt_kern *ctx)
 {
+	if (ctx->optval == ctx->buf)
+		return;
 	kfree(ctx->optval);
 }
 
+static void *sockopt_export_buf(struct bpf_sockopt_kern *ctx)
+{
+	void *p;
+
+	if (ctx->optval != ctx->buf)
+		return ctx->optval;
+
+	/* We've used bpf_sockopt_kern->buf as an intermediary storage,
+	 * but the BPF program indicates that we need to pass this
+	 * data to the kernel setsockopt handler. No way to export
+	 * on-stack buf, have to allocate a new buffer. The caller
+	 * is responsible for the kfree().
+	 */
+	p = kzalloc(ctx->optlen, GFP_USER);
+	if (!p)
+		return ERR_PTR(-ENOMEM);
+	memcpy(p, ctx->optval, ctx->optlen);
+	return p;
+}
+
 int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
 				       int *optname, char __user *optval,
 				       int *optlen, char **kernel_optval)
@@ -1389,8 +1420,14 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
 		 * use original userspace data.
 		 */
 		if (ctx.optlen != 0) {
-			*optlen = ctx.optlen;
-			*kernel_optval = ctx.optval;
+			void *buf = sockopt_export_buf(&ctx);
+
+			if (!IS_ERR(buf)) {
+				*optlen = ctx.optlen;
+				*kernel_optval = buf;
+			} else {
+				ret = PTR_ERR(buf);
+			}
 		}
 	}
 
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH bpf-next 2/2] bpf: split cgroup_bpf_enabled per attach type
  2020-12-17 17:23 [PATCH bpf-next 0/2] bpf: misc performance improvements for cgroup hooks Stanislav Fomichev
  2020-12-17 17:23 ` [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt Stanislav Fomichev
@ 2020-12-17 17:23 ` Stanislav Fomichev
  2020-12-21 22:40   ` Song Liu
  1 sibling, 1 reply; 16+ messages in thread
From: Stanislav Fomichev @ 2020-12-17 17:23 UTC (permalink / raw)
  To: netdev, bpf; +Cc: ast, daniel, Stanislav Fomichev

When we attach any cgroup hook, the rest (even if unused/unattached)
start to contribute a small overhead. In particular, the one we want to
avoid is __cgroup_bpf_run_filter_skb, which does two indirections to
get to the cgroup and pushes/pulls the skb.

Let's split cgroup_bpf_enabled to be per attach type to make sure only
the attach types actually in use trigger their hooks.

I've dropped some of the existing top-level cgroup_bpf_enabled checks
because the BPF_CGROUP_RUN_PROG_XXX macros usually have their own
cgroup_bpf_enabled check.

I also had to copy-paste BPF_CGROUP_RUN_SA_PROG_LOCK for
GETPEERNAME/GETSOCKNAME because the type used to index
cgroup_bpf_enabled[type] has to be a constant known at compile time.
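
For reference, a minimal sketch (with made-up names, not taken from
this patch) of the static-key-array pattern this relies on:
static_branch_unlikely() needs the key's address baked in at compile
time, so the index must be a constant at each call site, while
static_branch_inc()/dec() are plain function calls and can take a
runtime index.

#include <linux/jump_label.h>

#define DEMO_NR_TYPES 2

/* one static key per type, all initially disabled */
DEFINE_STATIC_KEY_ARRAY_FALSE(demo_enabled_key, DEMO_NR_TYPES);

/* 'type' must be a compile-time constant here, just like in
 * cgroup_bpf_enabled(type)
 */
#define demo_enabled(type) static_branch_unlikely(&demo_enabled_key[type])

static void demo_attach(int type)
{
	static_branch_inc(&demo_enabled_key[type]);	/* runtime index is fine */
}

static void demo_detach(int type)
{
	static_branch_dec(&demo_enabled_key[type]);
}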

Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
 include/linux/bpf-cgroup.h | 36 +++++++++++++++++++-----------------
 kernel/bpf/cgroup.c        | 14 ++++++--------
 net/ipv4/af_inet.c         |  9 +++++----
 net/ipv4/udp.c             |  7 +++----
 net/ipv6/af_inet6.c        |  9 +++++----
 net/ipv6/udp.c             |  7 +++----
 6 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 72e69a0e1e8c..05877980e4e6 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -23,8 +23,8 @@ struct ctl_table_header;
 
 #ifdef CONFIG_CGROUP_BPF
 
-extern struct static_key_false cgroup_bpf_enabled_key;
-#define cgroup_bpf_enabled static_branch_unlikely(&cgroup_bpf_enabled_key)
+extern struct static_key_false cgroup_bpf_enabled_key[MAX_BPF_ATTACH_TYPE];
+#define cgroup_bpf_enabled(type) static_branch_unlikely(&cgroup_bpf_enabled_key[type])
 
 DECLARE_PER_CPU(struct bpf_cgroup_storage*,
 		bpf_cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE]);
@@ -185,7 +185,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb)			      \
 ({									      \
 	int __ret = 0;							      \
-	if (cgroup_bpf_enabled)						      \
+	if (cgroup_bpf_enabled(BPF_CGROUP_INET_INGRESS))		      \
 		__ret = __cgroup_bpf_run_filter_skb(sk, skb,		      \
 						    BPF_CGROUP_INET_INGRESS); \
 									      \
@@ -195,7 +195,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_PROG_INET_EGRESS(sk, skb)			       \
 ({									       \
 	int __ret = 0;							       \
-	if (cgroup_bpf_enabled && sk && sk == skb->sk) {		       \
+	if (cgroup_bpf_enabled(BPF_CGROUP_INET_EGRESS) && sk && sk == skb->sk) { \
 		typeof(sk) __sk = sk_to_full_sk(sk);			       \
 		if (sk_fullsock(__sk))					       \
 			__ret = __cgroup_bpf_run_filter_skb(__sk, skb,	       \
@@ -207,7 +207,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_SK_PROG(sk, type)				       \
 ({									       \
 	int __ret = 0;							       \
-	if (cgroup_bpf_enabled) {					       \
+	if (cgroup_bpf_enabled(type)) {					       \
 		__ret = __cgroup_bpf_run_filter_sk(sk, type);		       \
 	}								       \
 	__ret;								       \
@@ -228,7 +228,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_SA_PROG(sk, uaddr, type)				       \
 ({									       \
 	int __ret = 0;							       \
-	if (cgroup_bpf_enabled)						       \
+	if (cgroup_bpf_enabled(type))					       \
 		__ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, type,     \
 							  NULL);	       \
 	__ret;								       \
@@ -237,7 +237,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, type, t_ctx)		       \
 ({									       \
 	int __ret = 0;							       \
-	if (cgroup_bpf_enabled)	{					       \
+	if (cgroup_bpf_enabled(type))	{				       \
 		lock_sock(sk);						       \
 		__ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, type,     \
 							  t_ctx);	       \
@@ -252,8 +252,10 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_PROG_INET6_BIND_LOCK(sk, uaddr)			       \
 	BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, BPF_CGROUP_INET6_BIND, NULL)
 
-#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (cgroup_bpf_enabled && \
-					    sk->sk_prot->pre_connect)
+#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk)				       \
+	((cgroup_bpf_enabled(BPF_CGROUP_INET4_CONNECT) ||		       \
+	  cgroup_bpf_enabled(BPF_CGROUP_INET6_CONNECT)) &&		       \
+	 sk->sk_prot->pre_connect)
 
 #define BPF_CGROUP_RUN_PROG_INET4_CONNECT(sk, uaddr)			       \
 	BPF_CGROUP_RUN_SA_PROG(sk, uaddr, BPF_CGROUP_INET4_CONNECT)
@@ -297,7 +299,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_PROG_SOCK_OPS_SK(sock_ops, sk)			\
 ({									\
 	int __ret = 0;							\
-	if (cgroup_bpf_enabled)						\
+	if (cgroup_bpf_enabled(BPF_CGROUP_SOCK_OPS))			\
 		__ret = __cgroup_bpf_run_filter_sock_ops(sk,		\
 							 sock_ops,	\
 							 BPF_CGROUP_SOCK_OPS); \
@@ -307,7 +309,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_PROG_SOCK_OPS(sock_ops)				       \
 ({									       \
 	int __ret = 0;							       \
-	if (cgroup_bpf_enabled && (sock_ops)->sk) {	       \
+	if (cgroup_bpf_enabled(BPF_CGROUP_SOCK_OPS) && (sock_ops)->sk) {       \
 		typeof(sk) __sk = sk_to_full_sk((sock_ops)->sk);	       \
 		if (__sk && sk_fullsock(__sk))				       \
 			__ret = __cgroup_bpf_run_filter_sock_ops(__sk,	       \
@@ -320,7 +322,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(type, major, minor, access)	      \
 ({									      \
 	int __ret = 0;							      \
-	if (cgroup_bpf_enabled)						      \
+	if (cgroup_bpf_enabled(BPF_CGROUP_DEVICE))			      \
 		__ret = __cgroup_bpf_check_dev_permission(type, major, minor, \
 							  access,	      \
 							  BPF_CGROUP_DEVICE); \
@@ -332,7 +334,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_RUN_PROG_SYSCTL(head, table, write, buf, count, pos)  \
 ({									       \
 	int __ret = 0;							       \
-	if (cgroup_bpf_enabled)						       \
+	if (cgroup_bpf_enabled(BPF_CGROUP_SYSCTL))			       \
 		__ret = __cgroup_bpf_run_filter_sysctl(head, table, write,     \
 						       buf, count, pos,        \
 						       BPF_CGROUP_SYSCTL);     \
@@ -343,7 +345,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 				       kernel_optval)			       \
 ({									       \
 	int __ret = 0;							       \
-	if (cgroup_bpf_enabled)						       \
+	if (cgroup_bpf_enabled(BPF_CGROUP_SETSOCKOPT))			       \
 		__ret = __cgroup_bpf_run_filter_setsockopt(sock, level,	       \
 							   optname, optval,    \
 							   optlen,	       \
@@ -354,7 +356,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 #define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen)			       \
 ({									       \
 	int __ret = 0;							       \
-	if (cgroup_bpf_enabled)						       \
+	if (cgroup_bpf_enabled(BPF_CGROUP_GETSOCKOPT))			       \
 		get_user(__ret, optlen);				       \
 	__ret;								       \
 })
@@ -363,7 +365,7 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
 				       max_optlen, retval)		       \
 ({									       \
 	int __ret = retval;						       \
-	if (cgroup_bpf_enabled)						       \
+	if (cgroup_bpf_enabled(BPF_CGROUP_GETSOCKOPT))			       \
 		__ret = __cgroup_bpf_run_filter_getsockopt(sock, level,	       \
 							   optname, optval,    \
 							   optlen, max_optlen, \
@@ -427,7 +429,7 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map,
 	return 0;
 }
 
-#define cgroup_bpf_enabled (0)
+#define cgroup_bpf_enabled(type) (0)
 #define BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, type, t_ctx) ({ 0; })
 #define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
 #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk,skb) ({ 0; })
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 0cb5d4376844..7986d9ef85f1 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -19,7 +19,7 @@
 
 #include "../cgroup/cgroup-internal.h"
 
-DEFINE_STATIC_KEY_FALSE(cgroup_bpf_enabled_key);
+DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_BPF_ATTACH_TYPE);
 EXPORT_SYMBOL(cgroup_bpf_enabled_key);
 
 void cgroup_bpf_offline(struct cgroup *cgrp)
@@ -128,7 +128,7 @@ static void cgroup_bpf_release(struct work_struct *work)
 			if (pl->link)
 				bpf_cgroup_link_auto_detach(pl->link);
 			kfree(pl);
-			static_branch_dec(&cgroup_bpf_enabled_key);
+			static_branch_dec(&cgroup_bpf_enabled_key[type]);
 		}
 		old_array = rcu_dereference_protected(
 				cgrp->bpf.effective[type],
@@ -499,7 +499,7 @@ int __cgroup_bpf_attach(struct cgroup *cgrp,
 	if (old_prog)
 		bpf_prog_put(old_prog);
 	else
-		static_branch_inc(&cgroup_bpf_enabled_key);
+		static_branch_inc(&cgroup_bpf_enabled_key[type]);
 	bpf_cgroup_storages_link(new_storage, cgrp, type);
 	return 0;
 
@@ -698,7 +698,7 @@ int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
 		cgrp->bpf.flags[type] = 0;
 	if (old_prog)
 		bpf_prog_put(old_prog);
-	static_branch_dec(&cgroup_bpf_enabled_key);
+	static_branch_dec(&cgroup_bpf_enabled_key[type]);
 	return 0;
 
 cleanup:
@@ -1371,8 +1371,7 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
 	 * attached to the hook so we don't waste time allocating
 	 * memory and locking the socket.
 	 */
-	if (!cgroup_bpf_enabled ||
-	    __cgroup_bpf_prog_array_is_empty(cgrp, BPF_CGROUP_SETSOCKOPT))
+	if (__cgroup_bpf_prog_array_is_empty(cgrp, BPF_CGROUP_SETSOCKOPT))
 		return 0;
 
 	/* Allocate a bit more than the initial user buffer for
@@ -1455,8 +1454,7 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
 	 * attached to the hook so we don't waste time allocating
 	 * memory and locking the socket.
 	 */
-	if (!cgroup_bpf_enabled ||
-	    __cgroup_bpf_prog_array_is_empty(cgrp, BPF_CGROUP_GETSOCKOPT))
+	if (__cgroup_bpf_prog_array_is_empty(cgrp, BPF_CGROUP_GETSOCKOPT))
 		return retval;
 
 	ctx.optlen = max_optlen;
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index b94fa8eb831b..6ba2930ff49b 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -777,18 +777,19 @@ int inet_getname(struct socket *sock, struct sockaddr *uaddr,
 			return -ENOTCONN;
 		sin->sin_port = inet->inet_dport;
 		sin->sin_addr.s_addr = inet->inet_daddr;
+		BPF_CGROUP_RUN_SA_PROG_LOCK(sk, (struct sockaddr *)sin,
+					    BPF_CGROUP_INET4_GETPEERNAME,
+					    NULL);
 	} else {
 		__be32 addr = inet->inet_rcv_saddr;
 		if (!addr)
 			addr = inet->inet_saddr;
 		sin->sin_port = inet->inet_sport;
 		sin->sin_addr.s_addr = addr;
-	}
-	if (cgroup_bpf_enabled)
 		BPF_CGROUP_RUN_SA_PROG_LOCK(sk, (struct sockaddr *)sin,
-					    peer ? BPF_CGROUP_INET4_GETPEERNAME :
-						   BPF_CGROUP_INET4_GETSOCKNAME,
+					    BPF_CGROUP_INET4_GETSOCKNAME,
 					    NULL);
+	}
 	memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
 	return sizeof(*sin);
 }
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index dece195f212c..fc3c2e75e400 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1124,7 +1124,7 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		rcu_read_unlock();
 	}
 
-	if (cgroup_bpf_enabled && !connected) {
+	if (cgroup_bpf_enabled(BPF_CGROUP_UDP4_SENDMSG) && !connected) {
 		err = BPF_CGROUP_RUN_PROG_UDP4_SENDMSG_LOCK(sk,
 					    (struct sockaddr *)usin, &ipc.addr);
 		if (err)
@@ -1858,9 +1858,8 @@ int udp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int noblock,
 		memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
 		*addr_len = sizeof(*sin);
 
-		if (cgroup_bpf_enabled)
-			BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk,
-							(struct sockaddr *)sin);
+		BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk,
+						      (struct sockaddr *)sin);
 	}
 
 	if (udp_sk(sk)->gro_enabled)
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index a7e3d170af51..fc985658dc91 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -527,18 +527,19 @@ int inet6_getname(struct socket *sock, struct sockaddr *uaddr,
 		sin->sin6_addr = sk->sk_v6_daddr;
 		if (np->sndflow)
 			sin->sin6_flowinfo = np->flow_label;
+		BPF_CGROUP_RUN_SA_PROG_LOCK(sk, (struct sockaddr *)sin,
+					    BPF_CGROUP_INET6_GETPEERNAME,
+					    NULL);
 	} else {
 		if (ipv6_addr_any(&sk->sk_v6_rcv_saddr))
 			sin->sin6_addr = np->saddr;
 		else
 			sin->sin6_addr = sk->sk_v6_rcv_saddr;
 		sin->sin6_port = inet->inet_sport;
-	}
-	if (cgroup_bpf_enabled)
 		BPF_CGROUP_RUN_SA_PROG_LOCK(sk, (struct sockaddr *)sin,
-					    peer ? BPF_CGROUP_INET6_GETPEERNAME :
-						   BPF_CGROUP_INET6_GETSOCKNAME,
+					    BPF_CGROUP_INET6_GETSOCKNAME,
 					    NULL);
+	}
 	sin->sin6_scope_id = ipv6_iface_scope_id(&sin->sin6_addr,
 						 sk->sk_bound_dev_if);
 	return sizeof(*sin);
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 9008f5796ad4..50611bd63647 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -409,9 +409,8 @@ int udpv6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 		}
 		*addr_len = sizeof(*sin6);
 
-		if (cgroup_bpf_enabled)
-			BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk,
-						(struct sockaddr *)sin6);
+		BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk,
+						      (struct sockaddr *)sin6);
 	}
 
 	if (udp_sk(sk)->gro_enabled)
@@ -1462,7 +1461,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		fl6.saddr = np->saddr;
 	fl6.fl6_sport = inet->inet_sport;
 
-	if (cgroup_bpf_enabled && !connected) {
+	if (cgroup_bpf_enabled(BPF_CGROUP_UDP6_SENDMSG) && !connected) {
 		err = BPF_CGROUP_RUN_PROG_UDP6_SENDMSG_LOCK(sk,
 					   (struct sockaddr *)sin6, &fl6.saddr);
 		if (err)
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-17 17:23 ` [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt Stanislav Fomichev
@ 2020-12-21 22:22   ` Song Liu
  2020-12-22  2:09     ` sdf
  2020-12-31  6:47     ` Martin KaFai Lau
  2020-12-21 22:25   ` Song Liu
  2020-12-22 19:11   ` Martin KaFai Lau
  2 siblings, 2 replies; 16+ messages in thread
From: Song Liu @ 2020-12-21 22:22 UTC (permalink / raw)
  To: Stanislav Fomichev; +Cc: Networking, bpf, Alexei Starovoitov, Daniel Borkmann

On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@google.com> wrote:
>
> When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> syscall starts incurring kzalloc/kfree cost. While, in general, it's
> not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> fastpath for incoming TCP, we don't want to have extra allocations in
> there.
>
> Let add a small buffer on the stack and use it for small (majority)
> {s,g}etsockopt values. I've started with 128 bytes to cover
> the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> currently, with some planned extension to 64 + some headroom
> for the future).

I don't really know the rule of thumb, but 128 bytes on stack feels too big to
me. I would like to hear others' opinions on this. Can we solve the problem
with some other mechanisms, e.g. a mempool?

[...]

>
> +static void *sockopt_export_buf(struct bpf_sockopt_kern *ctx)
> +{
> +       void *p;
> +
> +       if (ctx->optval != ctx->buf)
> +               return ctx->optval;
> +
> +       /* We've used bpf_sockopt_kern->buf as an intermediary storage,
> +        * but the BPF program indicates that we need to pass this
> +        * data to the kernel setsockopt handler. No way to export
> +        * on-stack buf, have to allocate a new buffer. The caller
> +        * is responsible for the kfree().
> +        */
> +       p = kzalloc(ctx->optlen, GFP_USER);
> +       if (!p)
> +               return ERR_PTR(-ENOMEM);
> +       memcpy(p, ctx->optval, ctx->optlen);
> +       return p;
> +}
> +
>  int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
>                                        int *optname, char __user *optval,
>                                        int *optlen, char **kernel_optval)
> @@ -1389,8 +1420,14 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
>                  * use original userspace data.
>                  */
>                 if (ctx.optlen != 0) {
> -                       *optlen = ctx.optlen;
> -                       *kernel_optval = ctx.optval;
> +                       void *buf = sockopt_export_buf(&ctx);

I found it hard to follow the logic here (when to allocate memory, how
to fail over, etc.). Do we have a plan to reuse sockopt_export_buf()?
If not, it is probably cleaner to put the logic in
__cgroup_bpf_run_filter_setsockopt().

Thanks,
Song

[...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-17 17:23 ` [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt Stanislav Fomichev
  2020-12-21 22:22   ` Song Liu
@ 2020-12-21 22:25   ` Song Liu
  2020-12-22  2:11     ` sdf
  2020-12-22 19:11   ` Martin KaFai Lau
  2 siblings, 1 reply; 16+ messages in thread
From: Song Liu @ 2020-12-21 22:25 UTC (permalink / raw)
  To: Stanislav Fomichev; +Cc: Networking, bpf, Alexei Starovoitov, Daniel Borkmann

On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@google.com> wrote:
>
> When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> syscall starts incurring kzalloc/kfree cost. While, in general, it's
> not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> fastpath for incoming TCP, we don't want to have extra allocations in
> there.
>
> Let add a small buffer on the stack and use it for small (majority)
> {s,g}etsockopt values. I've started with 128 bytes to cover
> the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> currently, with some planned extension to 64 + some headroom
> for the future).
>
> It seems natural to do the same for setsockopt, but it's a bit more
> involved when the BPF program modifies the data (where we have to
> kmalloc). The assumption is that for the majority of setsockopt
> calls (which are doing pure BPF options or apply policy) this
> will bring some benefit as well.
>
> Signed-off-by: Stanislav Fomichev <sdf@google.com>

Could you please share some performance numbers for this optimization?

Thanks,
Song

[...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 2/2] bpf: split cgroup_bpf_enabled per attach type
  2020-12-17 17:23 ` [PATCH bpf-next 2/2] bpf: split cgroup_bpf_enabled per attach type Stanislav Fomichev
@ 2020-12-21 22:40   ` Song Liu
  2020-12-22  1:57     ` sdf
  0 siblings, 1 reply; 16+ messages in thread
From: Song Liu @ 2020-12-21 22:40 UTC (permalink / raw)
  To: Stanislav Fomichev; +Cc: Networking, bpf, Alexei Starovoitov, Daniel Borkmann

On Thu, Dec 17, 2020 at 9:26 AM Stanislav Fomichev <sdf@google.com> wrote:
>
> When we attach any cgroup hook, the rest (even if unused/unattached) start
> to contribute small overhead. In particular, the one we want to avoid is
> __cgroup_bpf_run_filter_skb which does two redirections to get to
> the cgroup and pushes/pulls skb.
>
> Let's split cgroup_bpf_enabled to be per-attach to make sure
> only used attach types trigger.
>
> I've dropped some existing high-level cgroup_bpf_enabled in some
> places because BPF_PROG_CGROUP_XXX_RUN macros usually have another
> cgroup_bpf_enabled check.
>
> I also had to copy-paste BPF_CGROUP_RUN_SA_PROG_LOCK for
> GETPEERNAME/GETSOCKNAME because type for cgroup_bpf_enabled[type]
> has to be constant and known at compile time.
>
> Signed-off-by: Stanislav Fomichev <sdf@google.com>

[...]

> @@ -252,8 +252,10 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
>  #define BPF_CGROUP_RUN_PROG_INET6_BIND_LOCK(sk, uaddr)                        \
>         BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, BPF_CGROUP_INET6_BIND, NULL)
>
> -#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (cgroup_bpf_enabled && \
> -                                           sk->sk_prot->pre_connect)
> +#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk)                                    \
> +       ((cgroup_bpf_enabled(BPF_CGROUP_INET4_CONNECT) ||                      \
> +         cgroup_bpf_enabled(BPF_CGROUP_INET6_CONNECT)) &&                     \
> +        sk->sk_prot->pre_connect)

Patchwork highlighted the following (from checkpatch.pl, I guess):

CHECK: Macro argument 'sk' may be better as '(sk)' to avoid precedence issues
#99: FILE: include/linux/bpf-cgroup.h:255:
+#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk)       \
+ ((cgroup_bpf_enabled(BPF_CGROUP_INET4_CONNECT) ||       \
+  cgroup_bpf_enabled(BPF_CGROUP_INET6_CONNECT)) &&       \
+ sk->sk_prot->pre_connect)

Other than that, looks good to me.

Acked-by: Song Liu <songliubraving@fb.com>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 2/2] bpf: split cgroup_bpf_enabled per attach type
  2020-12-21 22:40   ` Song Liu
@ 2020-12-22  1:57     ` sdf
  0 siblings, 0 replies; 16+ messages in thread
From: sdf @ 2020-12-22  1:57 UTC (permalink / raw)
  To: Song Liu; +Cc: Networking, bpf, Alexei Starovoitov, Daniel Borkmann

On 12/21, Song Liu wrote:
> On Thu, Dec 17, 2020 at 9:26 AM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > When we attach any cgroup hook, the rest (even if unused/unattached)  
> start
> > to contribute small overhead. In particular, the one we want to avoid is
> > __cgroup_bpf_run_filter_skb which does two redirections to get to
> > the cgroup and pushes/pulls skb.
> >
> > Let's split cgroup_bpf_enabled to be per-attach to make sure
> > only used attach types trigger.
> >
> > I've dropped some existing high-level cgroup_bpf_enabled in some
> > places because BPF_PROG_CGROUP_XXX_RUN macros usually have another
> > cgroup_bpf_enabled check.
> >
> > I also had to copy-paste BPF_CGROUP_RUN_SA_PROG_LOCK for
> > GETPEERNAME/GETSOCKNAME because type for cgroup_bpf_enabled[type]
> > has to be constant and known at compile time.
> >
> > Signed-off-by: Stanislav Fomichev <sdf@google.com>

> [...]

> > @@ -252,8 +252,10 @@ int bpf_percpu_cgroup_storage_update(struct  
> bpf_map *map, void *key,
> >  #define BPF_CGROUP_RUN_PROG_INET6_BIND_LOCK(sk,  
> uaddr)                        \
> >         BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, BPF_CGROUP_INET6_BIND,  
> NULL)
> >
> > -#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (cgroup_bpf_enabled && \
> > -                                           sk->sk_prot->pre_connect)
> > +#define  
> BPF_CGROUP_PRE_CONNECT_ENABLED(sk)                                    \
> > +       ((cgroup_bpf_enabled(BPF_CGROUP_INET4_CONNECT) | 
> |                      \
> > +         cgroup_bpf_enabled(BPF_CGROUP_INET6_CONNECT))  
> &&                     \
> > +        sk->sk_prot->pre_connect)

> Patchworks highlighted the following (from checkpatch.pl I guess):

> CHECK: Macro argument 'sk' may be better as '(sk)' to avoid precedence  
> issues
> #99: FILE: include/linux/bpf-cgroup.h:255:
> +#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk)       \
> + ((cgroup_bpf_enabled(BPF_CGROUP_INET4_CONNECT) ||       \
> +  cgroup_bpf_enabled(BPF_CGROUP_INET6_CONNECT)) &&       \
> + sk->sk_prot->pre_connect)

> Other than, looks good to me.
Good point, will fix in a respin.

> Acked-by: Song Liu <songliubraving@fb.com>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-21 22:22   ` Song Liu
@ 2020-12-22  2:09     ` sdf
  2020-12-31  6:47     ` Martin KaFai Lau
  1 sibling, 0 replies; 16+ messages in thread
From: sdf @ 2020-12-22  2:09 UTC (permalink / raw)
  To: Song Liu; +Cc: Networking, bpf, Alexei Starovoitov, Daniel Borkmann

On 12/21, Song Liu wrote:
> On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> > TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> > fastpath for incoming TCP, we don't want to have extra allocations in
> > there.
> >
> > Let add a small buffer on the stack and use it for small (majority)
> > {s,g}etsockopt values. I've started with 128 bytes to cover
> > the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> > currently, with some planned extension to 64 + some headroom
> > for the future).

> I don't really know the rule of thumb, but 128 bytes on stack feels too  
> big to
> me. I would like to hear others' opinions on this. Can we solve the  
> problem
> with some other mechanisms, e.g. a mempool?
Yeah, I'm not sure either. But given that we have at least 4k stacks,
it didn't feel like too much. And we will be paying those 128 bytes
only when bpf is attached.

Regarding a mempool - I guess we can try that, depending on how the
discussion above ends up. I don't see any docs about mempool overhead
vs plain kmalloc (and looking at mempool_alloc, it seems that it
always calls pool->alloc and exists mostly for allocation guarantees,
not performance; correct me if I'm wrong).

> > +static void *sockopt_export_buf(struct bpf_sockopt_kern *ctx)
> > +{
> > +       void *p;
> > +
> > +       if (ctx->optval != ctx->buf)
> > +               return ctx->optval;
> > +
> > +       /* We've used bpf_sockopt_kern->buf as an intermediary storage,
> > +        * but the BPF program indicates that we need to pass this
> > +        * data to the kernel setsockopt handler. No way to export
> > +        * on-stack buf, have to allocate a new buffer. The caller
> > +        * is responsible for the kfree().
> > +        */
> > +       p = kzalloc(ctx->optlen, GFP_USER);
> > +       if (!p)
> > +               return ERR_PTR(-ENOMEM);
> > +       memcpy(p, ctx->optval, ctx->optlen);
> > +       return p;
> > +}
> > +
> >  int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
> >                                        int *optname, char __user  
> *optval,
> >                                        int *optlen, char  
> **kernel_optval)
> > @@ -1389,8 +1420,14 @@ int __cgroup_bpf_run_filter_setsockopt(struct  
> sock *sk, int *level,
> >                  * use original userspace data.
> >                  */
> >                 if (ctx.optlen != 0) {
> > -                       *optlen = ctx.optlen;
> > -                       *kernel_optval = ctx.optval;
> > +                       void *buf = sockopt_export_buf(&ctx);

> I found it is hard to follow the logic here (when to allocate memory, how  
> to
> fail over, etc.). Do we have plan to reuse sockopt_export_buf()? If not,  
> it is
> probably cleaner to put the logic in __cgroup_bpf_run_filter_setsockopt()?
Sure. I guess I can add something like 'sockopt_can_export' that
returns 'ctx->optval == ctx->buf' and do the kmalloc depending on that.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-21 22:25   ` Song Liu
@ 2020-12-22  2:11     ` sdf
  0 siblings, 0 replies; 16+ messages in thread
From: sdf @ 2020-12-22  2:11 UTC (permalink / raw)
  To: Song Liu; +Cc: Networking, bpf, Alexei Starovoitov, Daniel Borkmann

On 12/21, Song Liu wrote:
> On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> > TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> > fastpath for incoming TCP, we don't want to have extra allocations in
> > there.
> >
> > Let add a small buffer on the stack and use it for small (majority)
> > {s,g}etsockopt values. I've started with 128 bytes to cover
> > the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> > currently, with some planned extension to 64 + some headroom
> > for the future).
> >
> > It seems natural to do the same for setsockopt, but it's a bit more
> > involved when the BPF program modifies the data (where we have to
> > kmalloc). The assumption is that for the majority of setsockopt
> > calls (which are doing pure BPF options or apply policy) this
> > will bring some benefit as well.
> >
> > Signed-off-by: Stanislav Fomichev <sdf@google.com>

> Could you please share some performance numbers for this optimization?
We found out about this problem by looking at our global google
profiler, where TCP_ZEROCOPY_RECEIVE was showing up higher than usual.

So I don't really have a nice reproducer, but I assume I can try to
run something like tools/testing/selftests/net/tcp_mmap.c under perf
and see whether there is a clear difference.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-17 17:23 ` [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt Stanislav Fomichev
  2020-12-21 22:22   ` Song Liu
  2020-12-21 22:25   ` Song Liu
@ 2020-12-22 19:11   ` Martin KaFai Lau
  2020-12-23  3:09     ` sdf
  2 siblings, 1 reply; 16+ messages in thread
From: Martin KaFai Lau @ 2020-12-22 19:11 UTC (permalink / raw)
  To: Stanislav Fomichev; +Cc: netdev, bpf, ast, daniel

On Thu, Dec 17, 2020 at 09:23:23AM -0800, Stanislav Fomichev wrote:
> When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> syscall starts incurring kzalloc/kfree cost. While, in general, it's
> not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> fastpath for incoming TCP, we don't want to have extra allocations in
> there.
> 
> Let add a small buffer on the stack and use it for small (majority)
> {s,g}etsockopt values. I've started with 128 bytes to cover
> the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> currently, with some planned extension to 64 + some headroom
> for the future).
> 
> It seems natural to do the same for setsockopt, but it's a bit more
> involved when the BPF program modifies the data (where we have to
> kmalloc). The assumption is that for the majority of setsockopt
> calls (which are doing pure BPF options or apply policy) this
> will bring some benefit as well.
> 
> Signed-off-by: Stanislav Fomichev <sdf@google.com>
> ---
>  include/linux/filter.h |  3 +++
>  kernel/bpf/cgroup.c    | 41 +++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 42 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 29c27656165b..362eb0d7af5d 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -1281,6 +1281,8 @@ struct bpf_sysctl_kern {
>  	u64 tmp_reg;
>  };
>  
> +#define BPF_SOCKOPT_KERN_BUF_SIZE	128
Since these 128 bytes (which then need to be zeroed) are modeled after
the TCP_ZEROCOPY_RECEIVE use case, it would be useful to explain
how the bpf prog is expected to interact with
getsockopt(TCP_ZEROCOPY_RECEIVE).

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-22 19:11   ` Martin KaFai Lau
@ 2020-12-23  3:09     ` sdf
  2020-12-31  6:50       ` Martin KaFai Lau
  0 siblings, 1 reply; 16+ messages in thread
From: sdf @ 2020-12-23  3:09 UTC (permalink / raw)
  To: Martin KaFai Lau; +Cc: netdev, bpf, ast, daniel

On 12/22, Martin KaFai Lau wrote:
> On Thu, Dec 17, 2020 at 09:23:23AM -0800, Stanislav Fomichev wrote:
> > When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> > TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> > fastpath for incoming TCP, we don't want to have extra allocations in
> > there.
> >
> > Let add a small buffer on the stack and use it for small (majority)
> > {s,g}etsockopt values. I've started with 128 bytes to cover
> > the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> > currently, with some planned extension to 64 + some headroom
> > for the future).
> >
> > It seems natural to do the same for setsockopt, but it's a bit more
> > involved when the BPF program modifies the data (where we have to
> > kmalloc). The assumption is that for the majority of setsockopt
> > calls (which are doing pure BPF options or apply policy) this
> > will bring some benefit as well.
> >
> > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > ---
> >  include/linux/filter.h |  3 +++
> >  kernel/bpf/cgroup.c    | 41 +++++++++++++++++++++++++++++++++++++++--
> >  2 files changed, 42 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > index 29c27656165b..362eb0d7af5d 100644
> > --- a/include/linux/filter.h
> > +++ b/include/linux/filter.h
> > @@ -1281,6 +1281,8 @@ struct bpf_sysctl_kern {
> >  	u64 tmp_reg;
> >  };
> >
> > +#define BPF_SOCKOPT_KERN_BUF_SIZE	128
> Since these 128 bytes (which then needs to be zero-ed) is modeled after
> the TCP_ZEROCOPY_RECEIVE use case, it will be useful to explain
> a use case on how the bpf prog will interact with
> getsockopt(TCP_ZEROCOPY_RECEIVE).
The only thing I would expect a BPF program to do here is return EPERM
to cause the application to fall back to the non-zerocopy path (and,
mostly, bypass). I don't think BPF can meaningfully mangle the data in
struct tcp_zerocopy_receive.
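
For illustration, a minimal sketch of such a program (nothing below is
part of this patch; returning 0 from a cgroup/getsockopt hook makes the
syscall fail with EPERM, returning 1 lets it proceed):

#include <linux/bpf.h>
#include <linux/tcp.h>		/* TCP_ZEROCOPY_RECEIVE */
#include <netinet/in.h>		/* IPPROTO_TCP */
#include <bpf/bpf_helpers.h>

SEC("cgroup/getsockopt")
int block_tcp_zerocopy(struct bpf_sockopt *ctx)
{
	/* Reject only TCP_ZEROCOPY_RECEIVE so the application falls
	 * back to the regular receive path; allow everything else.
	 */
	if (ctx->level == IPPROTO_TCP &&
	    ctx->optname == TCP_ZEROCOPY_RECEIVE)
		return 0;	/* getsockopt() returns -EPERM */

	return 1;
}

char _license[] SEC("license") = "GPL";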

Does it address your concern? Or do you want me to add a comment or
something?

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-21 22:22   ` Song Liu
  2020-12-22  2:09     ` sdf
@ 2020-12-31  6:47     ` Martin KaFai Lau
  2020-12-31 20:14       ` sdf
  1 sibling, 1 reply; 16+ messages in thread
From: Martin KaFai Lau @ 2020-12-31  6:47 UTC (permalink / raw)
  To: Song Liu
  Cc: Stanislav Fomichev, Networking, bpf, Alexei Starovoitov, Daniel Borkmann

On Mon, Dec 21, 2020 at 02:22:41PM -0800, Song Liu wrote:
> On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> > TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> > fastpath for incoming TCP, we don't want to have extra allocations in
> > there.
> >
> > Let add a small buffer on the stack and use it for small (majority)
> > {s,g}etsockopt values. I've started with 128 bytes to cover
> > the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> > currently, with some planned extension to 64 + some headroom
> > for the future).
> 
> I don't really know the rule of thumb, but 128 bytes on stack feels too big to
> me. I would like to hear others' opinions on this. Can we solve the problem
> with some other mechanisms, e.g. a mempool?
It seems do_tcp_getsockopt() also has "struct tcp_zerocopy_receive"
on the stack.  I think the buf here is also mimicking
"struct tcp_zerocopy_receive", so it should not cause any new problem.

However, "struct tcp_zerocopy_receive" is only 40 bytes now.  I think it
is better to have a smaller buf for now and increase it later, when the
future needs of "struct tcp_zerocopy_receive" are also upstreamed.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-23  3:09     ` sdf
@ 2020-12-31  6:50       ` Martin KaFai Lau
  2020-12-31 20:18         ` sdf
  0 siblings, 1 reply; 16+ messages in thread
From: Martin KaFai Lau @ 2020-12-31  6:50 UTC (permalink / raw)
  To: sdf; +Cc: netdev, bpf, ast, daniel

On Tue, Dec 22, 2020 at 07:09:33PM -0800, sdf@google.com wrote:
> On 12/22, Martin KaFai Lau wrote:
> > On Thu, Dec 17, 2020 at 09:23:23AM -0800, Stanislav Fomichev wrote:
> > > When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> > > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > > not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> > > TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> > > fastpath for incoming TCP, we don't want to have extra allocations in
> > > there.
> > >
> > > Let add a small buffer on the stack and use it for small (majority)
> > > {s,g}etsockopt values. I've started with 128 bytes to cover
> > > the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> > > currently, with some planned extension to 64 + some headroom
> > > for the future).
> > >
> > > It seems natural to do the same for setsockopt, but it's a bit more
> > > involved when the BPF program modifies the data (where we have to
> > > kmalloc). The assumption is that for the majority of setsockopt
> > > calls (which are doing pure BPF options or apply policy) this
> > > will bring some benefit as well.
> > >
> > > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > > ---
> > >  include/linux/filter.h |  3 +++
> > >  kernel/bpf/cgroup.c    | 41 +++++++++++++++++++++++++++++++++++++++--
> > >  2 files changed, 42 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > > index 29c27656165b..362eb0d7af5d 100644
> > > --- a/include/linux/filter.h
> > > +++ b/include/linux/filter.h
> > > @@ -1281,6 +1281,8 @@ struct bpf_sysctl_kern {
> > >  	u64 tmp_reg;
> > >  };
> > >
> > > +#define BPF_SOCKOPT_KERN_BUF_SIZE	128
> > Since these 128 bytes (which then needs to be zero-ed) is modeled after
> > the TCP_ZEROCOPY_RECEIVE use case, it will be useful to explain
> > a use case on how the bpf prog will interact with
> > getsockopt(TCP_ZEROCOPY_RECEIVE).
> The only thing I would expect BPF program can do is to return EPERM
> to cause application to fallback to non-zerocopy path (and, mostly,
> bypass). I don't think BPF can meaningfully mangle the data in struct
> tcp_zerocopy_receive.
> 
> Does it address your concern? Or do you want me to add a comment or
> something?
I was asking because, while 128 bytes may work best for
TCP_ZEROCOPY_RECEIVE, it means many unnecessary byte-zeroings for most
other options. Hence, I am interested to see if there is a practical
bpf use case for TCP_ZEROCOPY_RECEIVE.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-31  6:47     ` Martin KaFai Lau
@ 2020-12-31 20:14       ` sdf
  2021-01-04 21:01         ` Martin KaFai Lau
  0 siblings, 1 reply; 16+ messages in thread
From: sdf @ 2020-12-31 20:14 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: Song Liu, Networking, bpf, Alexei Starovoitov, Daniel Borkmann

On 12/30, Martin KaFai Lau wrote:
> On Mon, Dec 21, 2020 at 02:22:41PM -0800, Song Liu wrote:
> > On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@google.com>  
> wrote:
> > >
> > > When we attach a bpf program to cgroup/getsockopt any other  
> getsockopt()
> > > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > > not an issue, sometimes it is, like in the case of  
> TCP_ZEROCOPY_RECEIVE.
> > > TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> > > fastpath for incoming TCP, we don't want to have extra allocations in
> > > there.
> > >
> > > Let add a small buffer on the stack and use it for small (majority)
> > > {s,g}etsockopt values. I've started with 128 bytes to cover
> > > the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> > > currently, with some planned extension to 64 + some headroom
> > > for the future).
> >
> > I don't really know the rule of thumb, but 128 bytes on stack feels too  
> big to
> > me. I would like to hear others' opinions on this. Can we solve the  
> problem
> > with some other mechanisms, e.g. a mempool?
> It seems the do_tcp_getsockopt() is also having "struct  
> tcp_zerocopy_receive"
> in the stack.  I think the buf here is also mimicking
> "struct tcp_zerocopy_receive", so should not cause any
> new problem.
Good point!

> However, "struct tcp_zerocopy_receive" is only 40 bytes now.  I think it
> is better to have a smaller buf for now and increase it later when the
> the future needs in "struct tcp_zerocopy_receive" is also upstreamed.
I can lower it to 64. Or even 40?

I can also try to add something like BUILD_BUG_ON(sizeof(struct
tcp_zerocopy_receive) < BPF_SOCKOPT_KERN_BUF_SIZE) to make sure this
buffer gets adjusted whenever we touch tcp_zerocopy_receive.
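
One way to spell that check, as a sketch (helper name made up, and the
comparison oriented so the build breaks if the struct ever outgrows
the on-stack buf):

#include <linux/build_bug.h>
#include <linux/filter.h>	/* BPF_SOCKOPT_KERN_BUF_SIZE */
#include <linux/tcp.h>		/* struct tcp_zerocopy_receive */

static inline void sockopt_buf_size_check(void)
{
	BUILD_BUG_ON(sizeof(struct tcp_zerocopy_receive) >
		     BPF_SOCKOPT_KERN_BUF_SIZE);
}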

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-31  6:50       ` Martin KaFai Lau
@ 2020-12-31 20:18         ` sdf
  0 siblings, 0 replies; 16+ messages in thread
From: sdf @ 2020-12-31 20:18 UTC (permalink / raw)
  To: Martin KaFai Lau; +Cc: netdev, bpf, ast, daniel

On 12/30, Martin KaFai Lau wrote:
> On Tue, Dec 22, 2020 at 07:09:33PM -0800, sdf@google.com wrote:
> > On 12/22, Martin KaFai Lau wrote:
> > > On Thu, Dec 17, 2020 at 09:23:23AM -0800, Stanislav Fomichev wrote:
> > > > When we attach a bpf program to cgroup/getsockopt any other  
> getsockopt()
> > > > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > > > not an issue, sometimes it is, like in the case of  
> TCP_ZEROCOPY_RECEIVE.
> > > > TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> > > > fastpath for incoming TCP, we don't want to have extra allocations  
> in
> > > > there.
> > > >
> > > > Let add a small buffer on the stack and use it for small (majority)
> > > > {s,g}etsockopt values. I've started with 128 bytes to cover
> > > > the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> > > > currently, with some planned extension to 64 + some headroom
> > > > for the future).
> > > >
> > > > It seems natural to do the same for setsockopt, but it's a bit more
> > > > involved when the BPF program modifies the data (where we have to
> > > > kmalloc). The assumption is that for the majority of setsockopt
> > > > calls (which are doing pure BPF options or apply policy) this
> > > > will bring some benefit as well.
> > > >
> > > > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > > > ---
> > > >  include/linux/filter.h |  3 +++
> > > >  kernel/bpf/cgroup.c    | 41  
> +++++++++++++++++++++++++++++++++++++++--
> > > >  2 files changed, 42 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > > > index 29c27656165b..362eb0d7af5d 100644
> > > > --- a/include/linux/filter.h
> > > > +++ b/include/linux/filter.h
> > > > @@ -1281,6 +1281,8 @@ struct bpf_sysctl_kern {
> > > >  	u64 tmp_reg;
> > > >  };
> > > >
> > > > +#define BPF_SOCKOPT_KERN_BUF_SIZE	128
> > > Since these 128 bytes (which then needs to be zero-ed) is modeled  
> after
> > > the TCP_ZEROCOPY_RECEIVE use case, it will be useful to explain
> > > a use case on how the bpf prog will interact with
> > > getsockopt(TCP_ZEROCOPY_RECEIVE).
> > The only thing I would expect BPF program can do is to return EPERM
> > to cause application to fallback to non-zerocopy path (and, mostly,
> > bypass). I don't think BPF can meaningfully mangle the data in struct
> > tcp_zerocopy_receive.
> >
> > Does it address your concern? Or do you want me to add a comment or
> > something?
> I was asking because, while 128 byte may work best for  
> TCP_ZEROCOPY_RECEIVCE,
> it is many unnecessary byte-zeroings for most options though.
> Hence, I am interested to see if there is a practical bpf
> use case for TCP_ZEROCOPY_RECEIVE.
I don't see any practical use case for TCP_ZEROCOPY_RECEIVE right now
(but you never know, maybe somebody would like to count the number
of zerocopy calls? inspect the arguments? idk).

Ideally, we should bypass BPF if (optname == TCP_ZEROCOPY_RECEIVE),
but then it's not 'generic' anymore :-/

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt
  2020-12-31 20:14       ` sdf
@ 2021-01-04 21:01         ` Martin KaFai Lau
  0 siblings, 0 replies; 16+ messages in thread
From: Martin KaFai Lau @ 2021-01-04 21:01 UTC (permalink / raw)
  To: sdf; +Cc: Song Liu, Networking, bpf, Alexei Starovoitov, Daniel Borkmann

On Thu, Dec 31, 2020 at 12:14:13PM -0800, sdf@google.com wrote:
> On 12/30, Martin KaFai Lau wrote:
> > On Mon, Dec 21, 2020 at 02:22:41PM -0800, Song Liu wrote:
> > > On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@google.com>
> > wrote:
> > > >
> > > > When we attach a bpf program to cgroup/getsockopt any other
> > getsockopt()
> > > > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > > > not an issue, sometimes it is, like in the case of
> > TCP_ZEROCOPY_RECEIVE.
> > > > TCP_ZEROCOPY_RECEIVE (ab)uses getsockopt system call to implement
> > > > fastpath for incoming TCP, we don't want to have extra allocations in
> > > > there.
> > > >
> > > > Let add a small buffer on the stack and use it for small (majority)
> > > > {s,g}etsockopt values. I've started with 128 bytes to cover
> > > > the options we care about (TCP_ZEROCOPY_RECEIVE which is 32 bytes
> > > > currently, with some planned extension to 64 + some headroom
> > > > for the future).
> > >
> > > I don't really know the rule of thumb, but 128 bytes on stack feels
> > too big to
> > > me. I would like to hear others' opinions on this. Can we solve the
> > problem
> > > with some other mechanisms, e.g. a mempool?
> > It seems the do_tcp_getsockopt() is also having "struct
> > tcp_zerocopy_receive"
> > in the stack.  I think the buf here is also mimicking
> > "struct tcp_zerocopy_receive", so should not cause any
> > new problem.
> Good point!
> 
> > However, "struct tcp_zerocopy_receive" is only 40 bytes now.  I think it
> > is better to have a smaller buf for now and increase it later when the
> > the future needs in "struct tcp_zerocopy_receive" is also upstreamed.
> I can lower it to 64. Or even 40?
I think either is fine.  Both will need another cacheline in
bpf_sockopt_kern.  128 is a bit too much without a clear understanding
of what "some headroom for the future" means.

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-01-04 21:02 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-17 17:23 [PATCH bpf-next 0/2] bpf: misc performance improvements for cgroup hooks Stanislav Fomichev
2020-12-17 17:23 ` [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in cgroup/{s,g}etsockopt Stanislav Fomichev
2020-12-21 22:22   ` Song Liu
2020-12-22  2:09     ` sdf
2020-12-31  6:47     ` Martin KaFai Lau
2020-12-31 20:14       ` sdf
2021-01-04 21:01         ` Martin KaFai Lau
2020-12-21 22:25   ` Song Liu
2020-12-22  2:11     ` sdf
2020-12-22 19:11   ` Martin KaFai Lau
2020-12-23  3:09     ` sdf
2020-12-31  6:50       ` Martin KaFai Lau
2020-12-31 20:18         ` sdf
2020-12-17 17:23 ` [PATCH bpf-next 2/2] bpf: split cgroup_bpf_enabled per attach type Stanislav Fomichev
2020-12-21 22:40   ` Song Liu
2020-12-22  1:57     ` sdf
