bpf.vger.kernel.org archive mirror
* [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP
@ 2020-11-12 21:12 Martin KaFai Lau
  2020-11-12 21:13 ` [PATCH v2 bpf-next 1/4] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Martin KaFai Lau @ 2020-11-12 21:12 UTC (permalink / raw)
  To: bpf; +Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev

This set allows FENTRY/FEXIT/RAW_TP tracing programs to use
bpf_sk_storage.  The first two patches are cleanups.  Patch 3 has
the required kernel changes to enable bpf_sk_storage for
FENTRY/FEXIT/RAW_TP.  The last patch adds tests.

Please see the individual patches for details.
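
As a quick illustration, a tracing program could then use the
helpers like this (a minimal sketch only; the traced function,
the map layout, and the timestamp usage are illustrative
assumptions, not part of this set; see patch 4 for the actual
selftests):

#include <vmlinux.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_helpers.h>

/* Sketch: keep a per-sk timestamp from an fentry program. */
struct {
	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);
} stg_map SEC(".maps");

SEC("fentry/tcp_connect")
int BPF_PROG(trace_tcp_connect, struct sock *sk)
{
	__u64 *ts;

	/* Create the sk storage on first use and record when this
	 * sk called connect.
	 */
	ts = bpf_sk_storage_get(&stg_map, sk, 0,
				BPF_SK_STORAGE_GET_F_CREATE);
	if (ts)
		*ts = bpf_ktime_get_ns();

	return 0;
}

char _license[] SEC("license") = "GPL";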

v2:
- Rename some function prefixes from sk_storage to bpf_sk_storage
- Use a prefix check instead of a substring check

Martin KaFai Lau (4):
  bpf: Folding omem_charge() into sk_storage_charge()
  bpf: Rename some functions in bpf_sk_storage
  bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  bpf: selftest: Use bpf_sk_storage in FENTRY/FEXIT/RAW_TP

 include/net/bpf_sk_storage.h                  |   2 +
 kernel/trace/bpf_trace.c                      |   5 +
 net/core/bpf_sk_storage.c                     | 135 +++++++++++++-----
 .../bpf/prog_tests/sk_storage_tracing.c       | 135 ++++++++++++++++++
 .../bpf/progs/test_sk_storage_trace_itself.c  |  29 ++++
 .../bpf/progs/test_sk_storage_tracing.c       |  95 ++++++++++++
 6 files changed, 369 insertions(+), 32 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c

-- 
2.24.1



* [PATCH v2 bpf-next 1/4] bpf: Folding omem_charge() into sk_storage_charge()
  2020-11-12 21:12 [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
@ 2020-11-12 21:13 ` Martin KaFai Lau
  2020-11-12 21:48   ` KP Singh
  2020-11-12 21:13 ` [PATCH v2 bpf-next 2/4] bpf: Rename some functions in bpf_sk_storage Martin KaFai Lau
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Martin KaFai Lau @ 2020-11-12 21:13 UTC (permalink / raw)
  To: bpf; +Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev, Song Liu

sk_storage_charge() is the only user of omem_charge().
This patch simplifies it by folding omem_charge() into
sk_storage_charge().

Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
 net/core/bpf_sk_storage.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index c907f0dc7f87..001eac65e40f 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -15,18 +15,6 @@
 
 DEFINE_BPF_STORAGE_CACHE(sk_cache);
 
-static int omem_charge(struct sock *sk, unsigned int size)
-{
-	/* same check as in sock_kmalloc() */
-	if (size <= sysctl_optmem_max &&
-	    atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
-		atomic_add(size, &sk->sk_omem_alloc);
-		return 0;
-	}
-
-	return -ENOMEM;
-}
-
 static struct bpf_local_storage_data *
 sk_storage_lookup(struct sock *sk, struct bpf_map *map, bool cacheit_lockit)
 {
@@ -316,7 +304,16 @@ BPF_CALL_2(bpf_sk_storage_delete, struct bpf_map *, map, struct sock *, sk)
 static int sk_storage_charge(struct bpf_local_storage_map *smap,
 			     void *owner, u32 size)
 {
-	return omem_charge(owner, size);
+	struct sock *sk = (struct sock *)owner;
+
+	/* same check as in sock_kmalloc() */
+	if (size <= sysctl_optmem_max &&
+	    atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
+		atomic_add(size, &sk->sk_omem_alloc);
+		return 0;
+	}
+
+	return -ENOMEM;
 }
 
 static void sk_storage_uncharge(struct bpf_local_storage_map *smap,
-- 
2.24.1



* [PATCH v2 bpf-next 2/4] bpf: Rename some functions in bpf_sk_storage
  2020-11-12 21:12 [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
  2020-11-12 21:13 ` [PATCH v2 bpf-next 1/4] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
@ 2020-11-12 21:13 ` Martin KaFai Lau
  2020-11-12 21:47   ` KP Singh
  2020-11-12 21:13 ` [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP Martin KaFai Lau
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Martin KaFai Lau @ 2020-11-12 21:13 UTC (permalink / raw)
  To: bpf; +Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev

Rename some of the functions currently prefixed with sk_storage
to bpf_sk_storage.  That will make the next patch have fewer
prefix checks and also bring bpf_sk_storage.c to more
consistent function naming.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
 net/core/bpf_sk_storage.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index 001eac65e40f..fd416678f236 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -16,7 +16,7 @@
 DEFINE_BPF_STORAGE_CACHE(sk_cache);
 
 static struct bpf_local_storage_data *
-sk_storage_lookup(struct sock *sk, struct bpf_map *map, bool cacheit_lockit)
+bpf_sk_storage_lookup(struct sock *sk, struct bpf_map *map, bool cacheit_lockit)
 {
 	struct bpf_local_storage *sk_storage;
 	struct bpf_local_storage_map *smap;
@@ -29,11 +29,11 @@ sk_storage_lookup(struct sock *sk, struct bpf_map *map, bool cacheit_lockit)
 	return bpf_local_storage_lookup(sk_storage, smap, cacheit_lockit);
 }
 
-static int sk_storage_delete(struct sock *sk, struct bpf_map *map)
+static int bpf_sk_storage_del(struct sock *sk, struct bpf_map *map)
 {
 	struct bpf_local_storage_data *sdata;
 
-	sdata = sk_storage_lookup(sk, map, false);
+	sdata = bpf_sk_storage_lookup(sk, map, false);
 	if (!sdata)
 		return -ENOENT;
 
@@ -82,7 +82,7 @@ void bpf_sk_storage_free(struct sock *sk)
 		kfree_rcu(sk_storage, rcu);
 }
 
-static void sk_storage_map_free(struct bpf_map *map)
+static void bpf_sk_storage_map_free(struct bpf_map *map)
 {
 	struct bpf_local_storage_map *smap;
 
@@ -91,7 +91,7 @@ static void sk_storage_map_free(struct bpf_map *map)
 	bpf_local_storage_map_free(smap);
 }
 
-static struct bpf_map *sk_storage_map_alloc(union bpf_attr *attr)
+static struct bpf_map *bpf_sk_storage_map_alloc(union bpf_attr *attr)
 {
 	struct bpf_local_storage_map *smap;
 
@@ -118,7 +118,7 @@ static void *bpf_fd_sk_storage_lookup_elem(struct bpf_map *map, void *key)
 	fd = *(int *)key;
 	sock = sockfd_lookup(fd, &err);
 	if (sock) {
-		sdata = sk_storage_lookup(sock->sk, map, true);
+		sdata = bpf_sk_storage_lookup(sock->sk, map, true);
 		sockfd_put(sock);
 		return sdata ? sdata->data : NULL;
 	}
@@ -154,7 +154,7 @@ static int bpf_fd_sk_storage_delete_elem(struct bpf_map *map, void *key)
 	fd = *(int *)key;
 	sock = sockfd_lookup(fd, &err);
 	if (sock) {
-		err = sk_storage_delete(sock->sk, map);
+		err = bpf_sk_storage_del(sock->sk, map);
 		sockfd_put(sock);
 		return err;
 	}
@@ -260,7 +260,7 @@ BPF_CALL_4(bpf_sk_storage_get, struct bpf_map *, map, struct sock *, sk,
 	if (!sk || !sk_fullsock(sk) || flags > BPF_SK_STORAGE_GET_F_CREATE)
 		return (unsigned long)NULL;
 
-	sdata = sk_storage_lookup(sk, map, true);
+	sdata = bpf_sk_storage_lookup(sk, map, true);
 	if (sdata)
 		return (unsigned long)sdata->data;
 
@@ -293,7 +293,7 @@ BPF_CALL_2(bpf_sk_storage_delete, struct bpf_map *, map, struct sock *, sk)
 	if (refcount_inc_not_zero(&sk->sk_refcnt)) {
 		int err;
 
-		err = sk_storage_delete(sk, map);
+		err = bpf_sk_storage_del(sk, map);
 		sock_put(sk);
 		return err;
 	}
@@ -301,8 +301,8 @@ BPF_CALL_2(bpf_sk_storage_delete, struct bpf_map *, map, struct sock *, sk)
 	return -ENOENT;
 }
 
-static int sk_storage_charge(struct bpf_local_storage_map *smap,
-			     void *owner, u32 size)
+static int bpf_sk_storage_charge(struct bpf_local_storage_map *smap,
+				 void *owner, u32 size)
 {
 	struct sock *sk = (struct sock *)owner;
 
@@ -316,8 +316,8 @@ static int sk_storage_charge(struct bpf_local_storage_map *smap,
 	return -ENOMEM;
 }
 
-static void sk_storage_uncharge(struct bpf_local_storage_map *smap,
-				void *owner, u32 size)
+static void bpf_sk_storage_uncharge(struct bpf_local_storage_map *smap,
+				    void *owner, u32 size)
 {
 	struct sock *sk = owner;
 
@@ -325,7 +325,7 @@ static void sk_storage_uncharge(struct bpf_local_storage_map *smap,
 }
 
 static struct bpf_local_storage __rcu **
-sk_storage_ptr(void *owner)
+bpf_sk_storage_ptr(void *owner)
 {
 	struct sock *sk = owner;
 
@@ -336,8 +336,8 @@ static int sk_storage_map_btf_id;
 const struct bpf_map_ops sk_storage_map_ops = {
 	.map_meta_equal = bpf_map_meta_equal,
 	.map_alloc_check = bpf_local_storage_map_alloc_check,
-	.map_alloc = sk_storage_map_alloc,
-	.map_free = sk_storage_map_free,
+	.map_alloc = bpf_sk_storage_map_alloc,
+	.map_free = bpf_sk_storage_map_free,
 	.map_get_next_key = notsupp_get_next_key,
 	.map_lookup_elem = bpf_fd_sk_storage_lookup_elem,
 	.map_update_elem = bpf_fd_sk_storage_update_elem,
@@ -345,9 +345,9 @@ const struct bpf_map_ops sk_storage_map_ops = {
 	.map_check_btf = bpf_local_storage_map_check_btf,
 	.map_btf_name = "bpf_local_storage_map",
 	.map_btf_id = &sk_storage_map_btf_id,
-	.map_local_storage_charge = sk_storage_charge,
-	.map_local_storage_uncharge = sk_storage_uncharge,
-	.map_owner_storage_ptr = sk_storage_ptr,
+	.map_local_storage_charge = bpf_sk_storage_charge,
+	.map_local_storage_uncharge = bpf_sk_storage_uncharge,
+	.map_owner_storage_ptr = bpf_sk_storage_ptr,
 };
 
 const struct bpf_func_proto bpf_sk_storage_get_proto = {
-- 
2.24.1



* [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  2020-11-12 21:12 [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
  2020-11-12 21:13 ` [PATCH v2 bpf-next 1/4] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
  2020-11-12 21:13 ` [PATCH v2 bpf-next 2/4] bpf: Rename some functions in bpf_sk_storage Martin KaFai Lau
@ 2020-11-12 21:13 ` Martin KaFai Lau
  2020-11-15  1:17   ` Jakub Kicinski
  2020-11-12 21:13 ` [PATCH v2 bpf-next 4/4] bpf: selftest: Use " Martin KaFai Lau
  2020-11-13  3:00 ` [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP patchwork-bot+netdevbpf
  4 siblings, 1 reply; 15+ messages in thread
From: Martin KaFai Lau @ 2020-11-12 21:13 UTC (permalink / raw)
  To: bpf; +Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev, Song Liu

This patch enables FENTRY/FEXIT/RAW_TP tracing programs to use
the bpf_sk_storage_(get|delete) helpers, so those tracing programs
can access the sk's bpf_local_storage.  The selftest in the last
patch shows some examples.

The bpf_sk_storage helpers are currently used in bpf-tcp-cc, tc,
cg sockops, etc., which run either in softirq or task context.

This patch adds bpf_sk_storage_get_tracing_proto and
bpf_sk_storage_delete_tracing_proto.  They will check
at runtime that the helpers can only be called when serving
softirq or running in a task context.  That should enable
most common tracing use cases on sk.

At load time, the new tracing_allowed() function ensures that a
tracing prog using the bpf_sk_storage_(get|delete) helpers is not
itself tracing any bpf_sk_storage*() function.
The sk is passed as "void *" when calling into bpf_local_storage.

This patch only allows tracing a kernel function.

Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
 include/net/bpf_sk_storage.h |  2 +
 kernel/trace/bpf_trace.c     |  5 +++
 net/core/bpf_sk_storage.c    | 74 ++++++++++++++++++++++++++++++++++++
 3 files changed, 81 insertions(+)

diff --git a/include/net/bpf_sk_storage.h b/include/net/bpf_sk_storage.h
index 3c516dd07caf..0e85713f56df 100644
--- a/include/net/bpf_sk_storage.h
+++ b/include/net/bpf_sk_storage.h
@@ -20,6 +20,8 @@ void bpf_sk_storage_free(struct sock *sk);
 
 extern const struct bpf_func_proto bpf_sk_storage_get_proto;
 extern const struct bpf_func_proto bpf_sk_storage_delete_proto;
+extern const struct bpf_func_proto bpf_sk_storage_get_tracing_proto;
+extern const struct bpf_func_proto bpf_sk_storage_delete_tracing_proto;
 
 struct bpf_local_storage_elem;
 struct bpf_sk_storage_diag;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e4515b0f62a8..cfce60ad1cb5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -16,6 +16,7 @@
 #include <linux/syscalls.h>
 #include <linux/error-injection.h>
 #include <linux/btf_ids.h>
+#include <net/bpf_sk_storage.h>
 
 #include <uapi/linux/bpf.h>
 #include <uapi/linux/btf.h>
@@ -1735,6 +1736,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_skc_to_tcp_request_sock_proto;
 	case BPF_FUNC_skc_to_udp6_sock:
 		return &bpf_skc_to_udp6_sock_proto;
+	case BPF_FUNC_sk_storage_get:
+		return &bpf_sk_storage_get_tracing_proto;
+	case BPF_FUNC_sk_storage_delete:
+		return &bpf_sk_storage_delete_tracing_proto;
 #endif
 	case BPF_FUNC_seq_printf:
 		return prog->expected_attach_type == BPF_TRACE_ITER ?
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index fd416678f236..359908a7d3c1 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -6,6 +6,7 @@
 #include <linux/types.h>
 #include <linux/spinlock.h>
 #include <linux/bpf.h>
+#include <linux/btf.h>
 #include <linux/btf_ids.h>
 #include <linux/bpf_local_storage.h>
 #include <net/bpf_sk_storage.h>
@@ -378,6 +379,79 @@ const struct bpf_func_proto bpf_sk_storage_delete_proto = {
 	.arg2_type	= ARG_PTR_TO_BTF_ID_SOCK_COMMON,
 };
 
+static bool bpf_sk_storage_tracing_allowed(const struct bpf_prog *prog)
+{
+	const struct btf *btf_vmlinux;
+	const struct btf_type *t;
+	const char *tname;
+	u32 btf_id;
+
+	if (prog->aux->dst_prog)
+		return false;
+
+	/* Ensure the tracing program is not tracing
+	 * any bpf_sk_storage*() function while also using
+	 * the bpf_sk_storage_(get|delete) helpers.
+	 */
+	switch (prog->expected_attach_type) {
+	case BPF_TRACE_RAW_TP:
+		/* bpf_sk_storage has no trace point */
+		return true;
+	case BPF_TRACE_FENTRY:
+	case BPF_TRACE_FEXIT:
+		btf_vmlinux = bpf_get_btf_vmlinux();
+		btf_id = prog->aux->attach_btf_id;
+		t = btf_type_by_id(btf_vmlinux, btf_id);
+		tname = btf_name_by_offset(btf_vmlinux, t->name_off);
+		return !!strncmp(tname, "bpf_sk_storage",
+				 strlen("bpf_sk_storage"));
+	default:
+		return false;
+	}
+
+	return false;
+}
+
+BPF_CALL_4(bpf_sk_storage_get_tracing, struct bpf_map *, map, struct sock *, sk,
+	   void *, value, u64, flags)
+{
+	if (!in_serving_softirq() && !in_task())
+		return (unsigned long)NULL;
+
+	return (unsigned long)____bpf_sk_storage_get(map, sk, value, flags);
+}
+
+BPF_CALL_2(bpf_sk_storage_delete_tracing, struct bpf_map *, map,
+	   struct sock *, sk)
+{
+	if (!in_serving_softirq() && !in_task())
+		return -EPERM;
+
+	return ____bpf_sk_storage_delete(map, sk);
+}
+
+const struct bpf_func_proto bpf_sk_storage_get_tracing_proto = {
+	.func		= bpf_sk_storage_get_tracing,
+	.gpl_only	= false,
+	.ret_type	= RET_PTR_TO_MAP_VALUE_OR_NULL,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_BTF_ID,
+	.arg2_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+	.arg3_type	= ARG_PTR_TO_MAP_VALUE_OR_NULL,
+	.arg4_type	= ARG_ANYTHING,
+	.allowed	= bpf_sk_storage_tracing_allowed,
+};
+
+const struct bpf_func_proto bpf_sk_storage_delete_tracing_proto = {
+	.func		= bpf_sk_storage_delete_tracing,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_BTF_ID,
+	.arg2_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+	.allowed	= bpf_sk_storage_tracing_allowed,
+};
+
 struct bpf_sk_storage_diag {
 	u32 nr_maps;
 	struct bpf_map *maps[];
-- 
2.24.1



* [PATCH v2 bpf-next 4/4] bpf: selftest: Use bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  2020-11-12 21:12 [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
                   ` (2 preceding siblings ...)
  2020-11-12 21:13 ` [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP Martin KaFai Lau
@ 2020-11-12 21:13 ` Martin KaFai Lau
  2020-11-13  3:00 ` [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP patchwork-bot+netdevbpf
  4 siblings, 0 replies; 15+ messages in thread
From: Martin KaFai Lau @ 2020-11-12 21:13 UTC (permalink / raw)
  To: bpf; +Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev

This patch tests storing task-related info into bpf_sk_storage
by fentry/fexit tracing at listen, accept,
and connect.  It also tests the raw_tp at inet_sock_set_state.

A negative test traces bpf_sk_storage_free() while using
bpf_sk_storage_get() in the same program.  It ensures that
this bpf program cannot load.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
 .../bpf/prog_tests/sk_storage_tracing.c       | 135 ++++++++++++++++++
 .../bpf/progs/test_sk_storage_trace_itself.c  |  29 ++++
 .../bpf/progs/test_sk_storage_tracing.c       |  95 ++++++++++++
 3 files changed, 259 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c

diff --git a/tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c b/tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c
new file mode 100644
index 000000000000..2b392590e8ca
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+
+#include <sys/types.h>
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+#include "test_progs.h"
+#include "network_helpers.h"
+#include "test_sk_storage_trace_itself.skel.h"
+#include "test_sk_storage_tracing.skel.h"
+
+#define LO_ADDR6 "::1"
+#define TEST_COMM "test_progs"
+
+struct sk_stg {
+	__u32 pid;
+	__u32 last_notclose_state;
+	char comm[16];
+};
+
+static struct test_sk_storage_tracing *skel;
+static __u32 duration;
+static pid_t my_pid;
+
+static int check_sk_stg(int sk_fd, __u32 expected_state)
+{
+	struct sk_stg sk_stg;
+	int err;
+
+	err = bpf_map_lookup_elem(bpf_map__fd(skel->maps.sk_stg_map), &sk_fd,
+				  &sk_stg);
+	if (!ASSERT_OK(err, "map_lookup(sk_stg_map)"))
+		return -1;
+
+	if (!ASSERT_EQ(sk_stg.last_notclose_state, expected_state,
+		       "last_notclose_state"))
+		return -1;
+
+	if (!ASSERT_EQ(sk_stg.pid, my_pid, "pid"))
+		return -1;
+
+	if (!ASSERT_STREQ(sk_stg.comm, skel->bss->task_comm, "task_comm"))
+		return -1;
+
+	return 0;
+}
+
+static void do_test(void)
+{
+	int listen_fd = -1, passive_fd = -1, active_fd = -1, value = 1, err;
+	char abyte;
+
+	listen_fd = start_server(AF_INET6, SOCK_STREAM, LO_ADDR6, 0, 0);
+	if (CHECK(listen_fd == -1, "start_server",
+		  "listen_fd:%d errno:%d\n", listen_fd, errno))
+		return;
+
+	active_fd = connect_to_fd(listen_fd, 0);
+	if (CHECK(active_fd == -1, "connect_to_fd", "active_fd:%d errno:%d\n",
+		  active_fd, errno))
+		goto out;
+
+	err = bpf_map_update_elem(bpf_map__fd(skel->maps.del_sk_stg_map),
+				  &active_fd, &value, 0);
+	if (!ASSERT_OK(err, "map_update(del_sk_stg_map)"))
+		goto out;
+
+	passive_fd = accept(listen_fd, NULL, 0);
+	if (CHECK(passive_fd == -1, "accept", "passive_fd:%d errno:%d\n",
+		  passive_fd, errno))
+		goto out;
+
+	shutdown(active_fd, SHUT_WR);
+	err = read(passive_fd, &abyte, 1);
+	if (!ASSERT_OK(err, "read(passive_fd)"))
+		goto out;
+
+	shutdown(passive_fd, SHUT_WR);
+	err = read(active_fd, &abyte, 1);
+	if (!ASSERT_OK(err, "read(active_fd)"))
+		goto out;
+
+	err = bpf_map_lookup_elem(bpf_map__fd(skel->maps.del_sk_stg_map),
+				  &active_fd, &value);
+	if (!ASSERT_ERR(err, "map_lookup(del_sk_stg_map)"))
+		goto out;
+
+	err = check_sk_stg(listen_fd, BPF_TCP_LISTEN);
+	if (!ASSERT_OK(err, "listen_fd sk_stg"))
+		goto out;
+
+	err = check_sk_stg(active_fd, BPF_TCP_FIN_WAIT2);
+	if (!ASSERT_OK(err, "active_fd sk_stg"))
+		goto out;
+
+	err = check_sk_stg(passive_fd, BPF_TCP_LAST_ACK);
+	ASSERT_OK(err, "passive_fd sk_stg");
+
+out:
+	if (active_fd != -1)
+		close(active_fd);
+	if (passive_fd != -1)
+		close(passive_fd);
+	if (listen_fd != -1)
+		close(listen_fd);
+}
+
+void test_sk_storage_tracing(void)
+{
+	struct test_sk_storage_trace_itself *skel_itself;
+	int err;
+
+	my_pid = getpid();
+
+	skel_itself = test_sk_storage_trace_itself__open_and_load();
+
+	if (!ASSERT_NULL(skel_itself, "test_sk_storage_trace_itself")) {
+		test_sk_storage_trace_itself__destroy(skel_itself);
+		return;
+	}
+
+	skel = test_sk_storage_tracing__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "test_sk_storage_tracing"))
+		return;
+
+	err = test_sk_storage_tracing__attach(skel);
+	if (!ASSERT_OK(err, "test_sk_storage_tracing__attach")) {
+		test_sk_storage_tracing__destroy(skel);
+		return;
+	}
+
+	do_test();
+
+	test_sk_storage_tracing__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c b/tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c
new file mode 100644
index 000000000000..59ef72d02a61
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, int);
+} sk_stg_map SEC(".maps");
+
+SEC("fentry/bpf_sk_storage_free")
+int BPF_PROG(trace_bpf_sk_storage_free, struct sock *sk)
+{
+	int *value;
+
+	value = bpf_sk_storage_get(&sk_stg_map, sk, 0,
+				   BPF_SK_STORAGE_GET_F_CREATE);
+
+	if (value)
+		*value = 1;
+
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c b/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c
new file mode 100644
index 000000000000..8e94e5c080aa
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_helpers.h>
+
+struct sk_stg {
+	__u32 pid;
+	__u32 last_notclose_state;
+	char comm[16];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, struct sk_stg);
+} sk_stg_map SEC(".maps");
+
+/* Testing delete */
+struct {
+	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, int);
+} del_sk_stg_map SEC(".maps");
+
+char task_comm[16] = "";
+
+SEC("tp_btf/inet_sock_set_state")
+int BPF_PROG(trace_inet_sock_set_state, struct sock *sk, int oldstate,
+	     int newstate)
+{
+	struct sk_stg *stg;
+
+	if (newstate == BPF_TCP_CLOSE)
+		return 0;
+
+	stg = bpf_sk_storage_get(&sk_stg_map, sk, 0,
+				 BPF_SK_STORAGE_GET_F_CREATE);
+	if (!stg)
+		return 0;
+
+	stg->last_notclose_state = newstate;
+
+	bpf_sk_storage_delete(&del_sk_stg_map, sk);
+
+	return 0;
+}
+
+static void set_task_info(struct sock *sk)
+{
+	struct task_struct *task;
+	struct sk_stg *stg;
+
+	stg = bpf_sk_storage_get(&sk_stg_map, sk, 0,
+				 BPF_SK_STORAGE_GET_F_CREATE);
+	if (!stg)
+		return;
+
+	stg->pid = bpf_get_current_pid_tgid();
+
+	task = (struct task_struct *)bpf_get_current_task();
+	bpf_core_read_str(&stg->comm, sizeof(stg->comm), &task->comm);
+	bpf_core_read_str(&task_comm, sizeof(task_comm), &task->comm);
+}
+
+SEC("fentry/inet_csk_listen_start")
+int BPF_PROG(trace_inet_csk_listen_start, struct sock *sk, int backlog)
+{
+	set_task_info(sk);
+
+	return 0;
+}
+
+SEC("fentry/tcp_connect")
+int BPF_PROG(trace_tcp_connect, struct sock *sk)
+{
+	set_task_info(sk);
+
+	return 0;
+}
+
+SEC("fexit/inet_csk_accept")
+int BPF_PROG(inet_csk_accept, struct sock *sk, int flags, int *err, bool kern,
+	     struct sock *accepted_sk)
+{
+	set_task_info(accepted_sk);
+
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.24.1



* Re: [PATCH v2 bpf-next 2/4] bpf: Rename some functions in bpf_sk_storage
  2020-11-12 21:13 ` [PATCH v2 bpf-next 2/4] bpf: Rename some functions in bpf_sk_storage Martin KaFai Lau
@ 2020-11-12 21:47   ` KP Singh
  0 siblings, 0 replies; 15+ messages in thread
From: KP Singh @ 2020-11-12 21:47 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team, Networking

On Thu, Nov 12, 2020 at 10:13 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> Rename some of the functions currently prefixed with sk_storage
> to bpf_sk_storage.  That will make the next patch have fewer
> prefix checks and also bring bpf_sk_storage.c to more
> consistent function naming.
>
> Signed-off-by: Martin KaFai Lau <kafai@fb.com>

Acked-by: KP Singh <kpsingh@google.com>


* Re: [PATCH v2 bpf-next 1/4] bpf: Folding omem_charge() into sk_storage_charge()
  2020-11-12 21:13 ` [PATCH v2 bpf-next 1/4] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
@ 2020-11-12 21:48   ` KP Singh
  0 siblings, 0 replies; 15+ messages in thread
From: KP Singh @ 2020-11-12 21:48 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team,
	Networking, Song Liu

On Thu, Nov 12, 2020 at 10:13 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> sk_storage_charge() is the only user of omem_charge().
> This patch simplifies it by folding omem_charge() into
> sk_storage_charge().
>
> Acked-by: Song Liu <songliubraving@fb.com>
> Signed-off-by: Martin KaFai Lau <kafai@fb.com>

Acked-by: KP Singh <kpsingh@google.com>


* Re: [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP
  2020-11-12 21:12 [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
                   ` (3 preceding siblings ...)
  2020-11-12 21:13 ` [PATCH v2 bpf-next 4/4] bpf: selftest: Use " Martin KaFai Lau
@ 2020-11-13  3:00 ` patchwork-bot+netdevbpf
  4 siblings, 0 replies; 15+ messages in thread
From: patchwork-bot+netdevbpf @ 2020-11-13  3:00 UTC (permalink / raw)
  To: Martin KaFai Lau; +Cc: bpf, ast, daniel, kernel-team, netdev

Hello:

This series was applied to bpf/bpf-next.git (refs/heads/master):

On Thu, 12 Nov 2020 13:12:55 -0800 you wrote:
> This set allows FENTRY/FEXIT/RAW_TP tracing programs to use
> bpf_sk_storage.  The first two patches are cleanups.  Patch 3 has
> the required kernel changes to enable bpf_sk_storage for
> FENTRY/FEXIT/RAW_TP.  The last patch adds tests.
> 
> Please see the individual patches for details.
> 
> [...]

Here is the summary with links:
  - [v2,bpf-next,1/4] bpf: Folding omem_charge() into sk_storage_charge()
    https://git.kernel.org/bpf/bpf-next/c/9e838b02b0bb
  - [v2,bpf-next,2/4] bpf: Rename some functions in bpf_sk_storage
    https://git.kernel.org/bpf/bpf-next/c/e794bfddb8b8
  - [v2,bpf-next,3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
    https://git.kernel.org/bpf/bpf-next/c/8e4597c627fb
  - [v2,bpf-next,4/4] bpf: selftest: Use bpf_sk_storage in FENTRY/FEXIT/RAW_TP
    https://git.kernel.org/bpf/bpf-next/c/53632e111946

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  2020-11-12 21:13 ` [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP Martin KaFai Lau
@ 2020-11-15  1:17   ` Jakub Kicinski
  2020-11-16 17:37     ` Martin KaFai Lau
  0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2020-11-15  1:17 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev, Song Liu

On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:
> This patch adds bpf_sk_storage_get_tracing_proto and
> bpf_sk_storage_delete_tracing_proto.  They will check
> at runtime that the helpers can only be called when serving
> softirq or running in a task context.  That should enable
> most common tracing use cases on sk.

> +	if (!in_serving_softirq() && !in_task())

This is a curious combination of checks. Would you mind indulging me
with an explanation?


* Re: [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  2020-11-15  1:17   ` Jakub Kicinski
@ 2020-11-16 17:37     ` Martin KaFai Lau
  2020-11-16 18:00       ` Jakub Kicinski
  0 siblings, 1 reply; 15+ messages in thread
From: Martin KaFai Lau @ 2020-11-16 17:37 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev, Song Liu

On Sat, Nov 14, 2020 at 05:17:20PM -0800, Jakub Kicinski wrote:
> On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:
> > This patch adds bpf_sk_storage_get_tracing_proto and
> > bpf_sk_storage_delete_tracing_proto.  They will check
> > at runtime that the helpers can only be called when serving
> > softirq or running in a task context.  That should enable
> > most common tracing use cases on sk.
> 
> > +	if (!in_serving_softirq() && !in_task())
> 
> This is a curious combination of checks. Would you mind indulging me
> with an explanation?
The current lock usage in bpf_local_storage.c is only expected to
run in either of these contexts.
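
To illustrate, a simplified sketch of the locking pattern there
(not the exact code in bpf_local_storage.c):

	/* The storage lock is taken with the _bh variant: */
	raw_spin_lock_bh(&local_storage->lock);
	/* ... link or unlink the storage element ... */
	raw_spin_unlock_bh(&local_storage->lock);

raw_spin_lock_bh() is only safe from task context or while
serving a softirq; taking it from hardirq/NMI context is not
allowed, hence the runtime check.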


* Re: [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  2020-11-16 17:37     ` Martin KaFai Lau
@ 2020-11-16 18:00       ` Jakub Kicinski
  2020-11-16 18:02         ` Jakub Kicinski
  2020-11-16 18:37         ` Martin KaFai Lau
  0 siblings, 2 replies; 15+ messages in thread
From: Jakub Kicinski @ 2020-11-16 18:00 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev, Song Liu

On Mon, 16 Nov 2020 09:37:34 -0800 Martin KaFai Lau wrote:
> On Sat, Nov 14, 2020 at 05:17:20PM -0800, Jakub Kicinski wrote:
> > On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:  
> > > This patch adds bpf_sk_storage_get_tracing_proto and
> > > bpf_sk_storage_delete_tracing_proto.  They will check
> > > at runtime that the helpers can only be called when serving
> > > softirq or running in a task context.  That should enable
> > > most common tracing use cases on sk.  
> >   
> > > +	if (!in_serving_softirq() && !in_task())  
> > 
> > This is a curious combination of checks. Would you mind indulging me
> > with an explanation?  
> The current lock usage in bpf_local_storage.c is only expected to
> run in either of these contexts.

:)

Locks that can run in any context but preempt disabled or softirq
disabled?

Let me cut to the chase. Are you sure you didn't mean to check
if (irq_count()) ?


* Re: [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  2020-11-16 18:00       ` Jakub Kicinski
@ 2020-11-16 18:02         ` Jakub Kicinski
  2020-11-16 18:37         ` Martin KaFai Lau
  1 sibling, 0 replies; 15+ messages in thread
From: Jakub Kicinski @ 2020-11-16 18:02 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev, Song Liu

On Mon, 16 Nov 2020 10:00:04 -0800 Jakub Kicinski wrote:
> irq_count()

Umpf. I meant (in_irq() || in_nmi()), don't care about sirq.


* Re: [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  2020-11-16 18:00       ` Jakub Kicinski
  2020-11-16 18:02         ` Jakub Kicinski
@ 2020-11-16 18:37         ` Martin KaFai Lau
  2020-11-16 18:43           ` Jakub Kicinski
  1 sibling, 1 reply; 15+ messages in thread
From: Martin KaFai Lau @ 2020-11-16 18:37 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev, Song Liu

On Mon, Nov 16, 2020 at 10:00:04AM -0800, Jakub Kicinski wrote:
> On Mon, 16 Nov 2020 09:37:34 -0800 Martin KaFai Lau wrote:
> > On Sat, Nov 14, 2020 at 05:17:20PM -0800, Jakub Kicinski wrote:
> > > On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:  
> > > > This patch adds bpf_sk_storage_get_tracing_proto and
> > > > bpf_sk_storage_delete_tracing_proto.  They will check
> > > > at runtime that the helpers can only be called when serving
> > > > softirq or running in a task context.  That should enable
> > > > most common tracing use cases on sk.  
> > >   
> > > > +	if (!in_serving_softirq() && !in_task())  
> > > 
> > > This is a curious combination of checks. Would you mind indulging me
> > > with an explanation?  
> > The current lock usage in bpf_local_storage.c is only expected to
> > run in either of these contexts.
> 
> :)
> 
> Locks that can run in any context but preempt disabled or softirq
> disabled?
Not exactly. e.g. running from irq won't work.

> 
> Let me cut to the chase. Are you sure you didn't mean to check
> if (irq_count()) ?
so, no.

From preempt.h:

/*
 * ...
 * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
 * ...
 */
#define in_interrupt()          (irq_count())


* Re: [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  2020-11-16 18:37         ` Martin KaFai Lau
@ 2020-11-16 18:43           ` Jakub Kicinski
  2020-11-16 18:51             ` Martin KaFai Lau
  0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2020-11-16 18:43 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev, Song Liu

On Mon, 16 Nov 2020 10:37:49 -0800 Martin KaFai Lau wrote:
> On Mon, Nov 16, 2020 at 10:00:04AM -0800, Jakub Kicinski wrote:
> > Locks that can run in any context but preempt disabled or softirq
> > disabled?  
> Not exactly. e.g. running from irq won't work.
> 
> > Let me cut to the chase. Are you sure you didn't mean to check
> > if (irq_count()) ?  
> so, no.
> 
> From preempt.h:
> 
> /*
>  * ...
>  * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
>  * ...
>  */
> #define in_interrupt()          (irq_count())

Right, as I said in my correction (in_irq() || in_nmi()).

Just to spell it out AFAIU in_serving_softirq() will return true when
softirq is active and interrupted by a hard irq or an NMI.



* Re: [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
  2020-11-16 18:43           ` Jakub Kicinski
@ 2020-11-16 18:51             ` Martin KaFai Lau
  0 siblings, 0 replies; 15+ messages in thread
From: Martin KaFai Lau @ 2020-11-16 18:51 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev, Song Liu

On Mon, Nov 16, 2020 at 10:43:40AM -0800, Jakub Kicinski wrote:
> On Mon, 16 Nov 2020 10:37:49 -0800 Martin KaFai Lau wrote:
> > On Mon, Nov 16, 2020 at 10:00:04AM -0800, Jakub Kicinski wrote:
> > > Locks that can run in any context but preempt disabled or softirq
> > > disabled?  
> > Not exactly. e.g. running from irq won't work.
> > 
> > > Let me cut to the chase. Are you sure you didn't mean to check
> > > if (irq_count()) ?  
> > so, no.
> > 
> > From preempt.h:
> > 
> > /*
> >  * ...
> >  * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
> >  * ...
> >  */
> > #define in_interrupt()          (irq_count())
> 
> Right, as I said in my correction (in_irq() || in_nmi()).
> 
> Just to spell it out AFAIU in_serving_softirq() will return true when
> softirq is active and interrupted by a hard irq or an NMI.
I see what you have been getting at now.

Good point. will post a fix.
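
(For reference, a stricter check along the lines discussed above
might look like the sketch below; this is an assumption about the
direction of the follow-up, not the posted fix itself:)

	/* in_serving_softirq() stays true when the softirq is
	 * interrupted by a hardirq or an NMI, so reject those
	 * contexts explicitly first.
	 */
	if (in_irq() || in_nmi())
		return -EPERM;
	if (!in_serving_softirq() && !in_task())
		return -EPERM;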


end of thread

Thread overview: 15+ messages
2020-11-12 21:12 [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
2020-11-12 21:13 ` [PATCH v2 bpf-next 1/4] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
2020-11-12 21:48   ` KP Singh
2020-11-12 21:13 ` [PATCH v2 bpf-next 2/4] bpf: Rename some functions in bpf_sk_storage Martin KaFai Lau
2020-11-12 21:47   ` KP Singh
2020-11-12 21:13 ` [PATCH v2 bpf-next 3/4] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP Martin KaFai Lau
2020-11-15  1:17   ` Jakub Kicinski
2020-11-16 17:37     ` Martin KaFai Lau
2020-11-16 18:00       ` Jakub Kicinski
2020-11-16 18:02         ` Jakub Kicinski
2020-11-16 18:37         ` Martin KaFai Lau
2020-11-16 18:43           ` Jakub Kicinski
2020-11-16 18:51             ` Martin KaFai Lau
2020-11-12 21:13 ` [PATCH v2 bpf-next 4/4] bpf: selftest: Use " Martin KaFai Lau
2020-11-13  3:00 ` [PATCH v2 bpf-next 0/4] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP patchwork-bot+netdevbpf
