* [PATCH bpf-next 0/3] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP
@ 2020-11-06 22:07 Martin KaFai Lau
2020-11-06 22:07 ` [PATCH bpf-next 1/3] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
` (2 more replies)
0 siblings, 3 replies; 18+ messages in thread
From: Martin KaFai Lau @ 2020-11-06 22:07 UTC (permalink / raw)
To: bpf; +Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev
This set allows FENTRY/FEXIT/RAW_TP tracing programs to use
bpf_sk_storage. The first patch is a cleanup, the second has the
kernel changes required to enable bpf_sk_storage for
FENTRY/FEXIT/RAW_TP, and the last adds the selftests.
Please see the individual patches for details.
Martin KaFai Lau (3):
bpf: Folding omem_charge() into sk_storage_charge()
bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
bpf: selftest: Use bpf_sk_storage in FENTRY/FEXIT/RAW_TP
include/net/bpf_sk_storage.h | 2 +
kernel/trace/bpf_trace.c | 5 +
net/core/bpf_sk_storage.c | 96 +++++++++++--
.../bpf/prog_tests/sk_storage_tracing.c | 135 ++++++++++++++++++
.../bpf/progs/test_sk_storage_trace_itself.c | 29 ++++
.../bpf/progs/test_sk_storage_tracing.c | 95 ++++++++++++
6 files changed, 349 insertions(+), 13 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c
create mode 100644 tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c
create mode 100644 tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c
--
2.24.1
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH bpf-next 1/3] bpf: Folding omem_charge() into sk_storage_charge()
2020-11-06 22:07 [PATCH bpf-next 0/3] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
@ 2020-11-06 22:07 ` Martin KaFai Lau
2020-11-06 22:44 ` Song Liu
2020-11-06 22:08 ` [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP Martin KaFai Lau
2020-11-06 22:08 ` [PATCH bpf-next 3/3] bpf: selftest: Use " Martin KaFai Lau
2 siblings, 1 reply; 18+ messages in thread
From: Martin KaFai Lau @ 2020-11-06 22:07 UTC (permalink / raw)
To: bpf; +Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev
sk_storage_charge() is the only user of omem_charge().
This patch simplifies it by folding omem_charge() into
sk_storage_charge().
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
net/core/bpf_sk_storage.c | 23 ++++++++++-------------
1 file changed, 10 insertions(+), 13 deletions(-)
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index c907f0dc7f87..001eac65e40f 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -15,18 +15,6 @@
DEFINE_BPF_STORAGE_CACHE(sk_cache);
-static int omem_charge(struct sock *sk, unsigned int size)
-{
- /* same check as in sock_kmalloc() */
- if (size <= sysctl_optmem_max &&
- atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
- atomic_add(size, &sk->sk_omem_alloc);
- return 0;
- }
-
- return -ENOMEM;
-}
-
static struct bpf_local_storage_data *
sk_storage_lookup(struct sock *sk, struct bpf_map *map, bool cacheit_lockit)
{
@@ -316,7 +304,16 @@ BPF_CALL_2(bpf_sk_storage_delete, struct bpf_map *, map, struct sock *, sk)
static int sk_storage_charge(struct bpf_local_storage_map *smap,
void *owner, u32 size)
{
- return omem_charge(owner, size);
+ struct sock *sk = (struct sock *)owner;
+
+ /* same check as in sock_kmalloc() */
+ if (size <= sysctl_optmem_max &&
+ atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
+ atomic_add(size, &sk->sk_omem_alloc);
+ return 0;
+ }
+
+ return -ENOMEM;
}
static void sk_storage_uncharge(struct bpf_local_storage_map *smap,
--
2.24.1
* [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-06 22:07 [PATCH bpf-next 0/3] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
2020-11-06 22:07 ` [PATCH bpf-next 1/3] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
@ 2020-11-06 22:08 ` Martin KaFai Lau
2020-11-06 22:59 ` Song Liu
2020-11-07 1:14 ` Andrii Nakryiko
2020-11-06 22:08 ` [PATCH bpf-next 3/3] bpf: selftest: Use " Martin KaFai Lau
2 siblings, 2 replies; 18+ messages in thread
From: Martin KaFai Lau @ 2020-11-06 22:08 UTC (permalink / raw)
To: bpf; +Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev
This patch enables the FENTRY/FEXIT/RAW_TP tracing program to use
the bpf_sk_storage_(get|delete) helper, so those tracing programs
can access the sk's bpf_local_storage and the later selftest
will show some examples.
The bpf_sk_storage is currently used in bpf-tcp-cc, tc,
cg sockops...etc which is running either in softirq or
task context.
This patch adds bpf_sk_storage_get_tracing_proto and
bpf_sk_storage_delete_tracing_proto. They will check
in runtime that the helpers can only be called when serving
softirq or running in a task context. That should enable
most common tracing use cases on sk.
During the load time, the new tracing_allowed() function
will ensure the tracing prog using the bpf_sk_storage_(get|delete)
helper is not tracing any *sk_storage*() function itself.
The sk is passed as "void *" when calling into bpf_local_storage.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
include/net/bpf_sk_storage.h | 2 +
kernel/trace/bpf_trace.c | 5 +++
net/core/bpf_sk_storage.c | 73 ++++++++++++++++++++++++++++++++++++
3 files changed, 80 insertions(+)
diff --git a/include/net/bpf_sk_storage.h b/include/net/bpf_sk_storage.h
index 3c516dd07caf..0e85713f56df 100644
--- a/include/net/bpf_sk_storage.h
+++ b/include/net/bpf_sk_storage.h
@@ -20,6 +20,8 @@ void bpf_sk_storage_free(struct sock *sk);
extern const struct bpf_func_proto bpf_sk_storage_get_proto;
extern const struct bpf_func_proto bpf_sk_storage_delete_proto;
+extern const struct bpf_func_proto bpf_sk_storage_get_tracing_proto;
+extern const struct bpf_func_proto bpf_sk_storage_delete_tracing_proto;
struct bpf_local_storage_elem;
struct bpf_sk_storage_diag;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e4515b0f62a8..cfce60ad1cb5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -16,6 +16,7 @@
#include <linux/syscalls.h>
#include <linux/error-injection.h>
#include <linux/btf_ids.h>
+#include <net/bpf_sk_storage.h>
#include <uapi/linux/bpf.h>
#include <uapi/linux/btf.h>
@@ -1735,6 +1736,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
return &bpf_skc_to_tcp_request_sock_proto;
case BPF_FUNC_skc_to_udp6_sock:
return &bpf_skc_to_udp6_sock_proto;
+ case BPF_FUNC_sk_storage_get:
+ return &bpf_sk_storage_get_tracing_proto;
+ case BPF_FUNC_sk_storage_delete:
+ return &bpf_sk_storage_delete_tracing_proto;
#endif
case BPF_FUNC_seq_printf:
return prog->expected_attach_type == BPF_TRACE_ITER ?
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index 001eac65e40f..1a41c917e08d 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -6,6 +6,7 @@
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/bpf.h>
+#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/bpf_local_storage.h>
#include <net/bpf_sk_storage.h>
@@ -378,6 +379,78 @@ const struct bpf_func_proto bpf_sk_storage_delete_proto = {
.arg2_type = ARG_PTR_TO_BTF_ID_SOCK_COMMON,
};
+static bool bpf_sk_storage_tracing_allowed(const struct bpf_prog *prog)
+{
+ const struct btf *btf_vmlinux;
+ const struct btf_type *t;
+ const char *tname;
+ u32 btf_id;
+
+ if (prog->aux->dst_prog)
+ return false;
+
+ /* Ensure the tracing program is not tracing
+ * any *sk_storage*() function and also
+ * use the bpf_sk_storage_(get|delete) helper.
+ */
+ switch (prog->expected_attach_type) {
+ case BPF_TRACE_RAW_TP:
+ /* bpf_sk_storage has no trace point */
+ return true;
+ case BPF_TRACE_FENTRY:
+ case BPF_TRACE_FEXIT:
+ btf_vmlinux = bpf_get_btf_vmlinux();
+ btf_id = prog->aux->attach_btf_id;
+ t = btf_type_by_id(btf_vmlinux, btf_id);
+ tname = btf_name_by_offset(btf_vmlinux, t->name_off);
+ return !strstr(tname, "sk_storage");
+ default:
+ return false;
+ }
+
+ return false;
+}
+
+BPF_CALL_4(bpf_sk_storage_get_tracing, struct bpf_map *, map, struct sock *, sk,
+ void *, value, u64, flags)
+{
+ if (!in_serving_softirq() && !in_task())
+ return (unsigned long)NULL;
+
+ return (unsigned long)____bpf_sk_storage_get(map, sk, value, flags);
+}
+
+BPF_CALL_2(bpf_sk_storage_delete_tracing, struct bpf_map *, map,
+ struct sock *, sk)
+{
+ if (!in_serving_softirq() && !in_task())
+ return -EPERM;
+
+ return ____bpf_sk_storage_delete(map, sk);
+}
+
+const struct bpf_func_proto bpf_sk_storage_get_tracing_proto = {
+ .func = bpf_sk_storage_get_tracing,
+ .gpl_only = false,
+ .ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL,
+ .arg1_type = ARG_CONST_MAP_PTR,
+ .arg2_type = ARG_PTR_TO_BTF_ID,
+ .arg2_btf_id = &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+ .arg3_type = ARG_PTR_TO_MAP_VALUE_OR_NULL,
+ .arg4_type = ARG_ANYTHING,
+ .allowed = bpf_sk_storage_tracing_allowed,
+};
+
+const struct bpf_func_proto bpf_sk_storage_delete_tracing_proto = {
+ .func = bpf_sk_storage_delete_tracing,
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_CONST_MAP_PTR,
+ .arg2_type = ARG_PTR_TO_BTF_ID,
+ .arg2_btf_id = &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+ .allowed = bpf_sk_storage_tracing_allowed,
+};
+
struct bpf_sk_storage_diag {
u32 nr_maps;
struct bpf_map *maps[];
--
2.24.1
* [PATCH bpf-next 3/3] bpf: selftest: Use bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-06 22:07 [PATCH bpf-next 0/3] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
2020-11-06 22:07 ` [PATCH bpf-next 1/3] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
2020-11-06 22:08 ` [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP Martin KaFai Lau
@ 2020-11-06 22:08 ` Martin KaFai Lau
2 siblings, 0 replies; 18+ messages in thread
From: Martin KaFai Lau @ 2020-11-06 22:08 UTC (permalink / raw)
To: bpf; +Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, netdev
This patch tests storing task-related info in bpf_sk_storage from
fentry/fexit programs tracing listen, accept, and connect. It also
tests the raw_tp at inet_sock_set_state.
A negative test traces bpf_sk_storage_free() while using
bpf_sk_storage_get() at the same time, and ensures such a bpf
program cannot be loaded.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
.../bpf/prog_tests/sk_storage_tracing.c | 135 ++++++++++++++++++
.../bpf/progs/test_sk_storage_trace_itself.c | 29 ++++
.../bpf/progs/test_sk_storage_tracing.c | 95 ++++++++++++
3 files changed, 259 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c
create mode 100644 tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c
create mode 100644 tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c
diff --git a/tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c b/tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c
new file mode 100644
index 000000000000..2b392590e8ca
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/sk_storage_tracing.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+
+#include <sys/types.h>
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+#include "test_progs.h"
+#include "network_helpers.h"
+#include "test_sk_storage_trace_itself.skel.h"
+#include "test_sk_storage_tracing.skel.h"
+
+#define LO_ADDR6 "::1"
+#define TEST_COMM "test_progs"
+
+struct sk_stg {
+ __u32 pid;
+ __u32 last_notclose_state;
+ char comm[16];
+};
+
+static struct test_sk_storage_tracing *skel;
+static __u32 duration;
+static pid_t my_pid;
+
+static int check_sk_stg(int sk_fd, __u32 expected_state)
+{
+ struct sk_stg sk_stg;
+ int err;
+
+ err = bpf_map_lookup_elem(bpf_map__fd(skel->maps.sk_stg_map), &sk_fd,
+ &sk_stg);
+ if (!ASSERT_OK(err, "map_lookup(sk_stg_map)"))
+ return -1;
+
+ if (!ASSERT_EQ(sk_stg.last_notclose_state, expected_state,
+ "last_notclose_state"))
+ return -1;
+
+ if (!ASSERT_EQ(sk_stg.pid, my_pid, "pid"))
+ return -1;
+
+ if (!ASSERT_STREQ(sk_stg.comm, skel->bss->task_comm, "task_comm"))
+ return -1;
+
+ return 0;
+}
+
+static void do_test(void)
+{
+ int listen_fd = -1, passive_fd = -1, active_fd = -1, value = 1, err;
+ char abyte;
+
+ listen_fd = start_server(AF_INET6, SOCK_STREAM, LO_ADDR6, 0, 0);
+ if (CHECK(listen_fd == -1, "start_server",
+ "listen_fd:%d errno:%d\n", listen_fd, errno))
+ return;
+
+ active_fd = connect_to_fd(listen_fd, 0);
+ if (CHECK(active_fd == -1, "connect_to_fd", "active_fd:%d errno:%d\n",
+ active_fd, errno))
+ goto out;
+
+ err = bpf_map_update_elem(bpf_map__fd(skel->maps.del_sk_stg_map),
+ &active_fd, &value, 0);
+ if (!ASSERT_OK(err, "map_update(del_sk_stg_map)"))
+ goto out;
+
+ passive_fd = accept(listen_fd, NULL, 0);
+ if (CHECK(passive_fd == -1, "accept", "passive_fd:%d errno:%d\n",
+ passive_fd, errno))
+ goto out;
+
+ shutdown(active_fd, SHUT_WR);
+ err = read(passive_fd, &abyte, 1);
+ if (!ASSERT_OK(err, "read(passive_fd)"))
+ goto out;
+
+ shutdown(passive_fd, SHUT_WR);
+ err = read(active_fd, &abyte, 1);
+ if (!ASSERT_OK(err, "read(active_fd)"))
+ goto out;
+
+ err = bpf_map_lookup_elem(bpf_map__fd(skel->maps.del_sk_stg_map),
+ &active_fd, &value);
+ if (!ASSERT_ERR(err, "map_lookup(del_sk_stg_map)"))
+ goto out;
+
+ err = check_sk_stg(listen_fd, BPF_TCP_LISTEN);
+ if (!ASSERT_OK(err, "listen_fd sk_stg"))
+ goto out;
+
+ err = check_sk_stg(active_fd, BPF_TCP_FIN_WAIT2);
+ if (!ASSERT_OK(err, "active_fd sk_stg"))
+ goto out;
+
+ err = check_sk_stg(passive_fd, BPF_TCP_LAST_ACK);
+ ASSERT_OK(err, "passive_fd sk_stg");
+
+out:
+ if (active_fd != -1)
+ close(active_fd);
+ if (passive_fd != -1)
+ close(passive_fd);
+ if (listen_fd != -1)
+ close(listen_fd);
+}
+
+void test_sk_storage_tracing(void)
+{
+ struct test_sk_storage_trace_itself *skel_itself;
+ int err;
+
+ my_pid = getpid();
+
+ skel_itself = test_sk_storage_trace_itself__open_and_load();
+
+ if (!ASSERT_NULL(skel_itself, "test_sk_storage_trace_itself")) {
+ test_sk_storage_trace_itself__destroy(skel_itself);
+ return;
+ }
+
+ skel = test_sk_storage_tracing__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "test_sk_storage_tracing"))
+ return;
+
+ err = test_sk_storage_tracing__attach(skel);
+ if (!ASSERT_OK(err, "test_sk_storage_tracing__attach")) {
+ test_sk_storage_tracing__destroy(skel);
+ return;
+ }
+
+ do_test();
+
+ test_sk_storage_tracing__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c b/tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c
new file mode 100644
index 000000000000..59ef72d02a61
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sk_storage_trace_itself.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+struct {
+ __uint(type, BPF_MAP_TYPE_SK_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, int);
+} sk_stg_map SEC(".maps");
+
+SEC("fentry/bpf_sk_storage_free")
+int BPF_PROG(trace_bpf_sk_storage_free, struct sock *sk)
+{
+ int *value;
+
+ value = bpf_sk_storage_get(&sk_stg_map, sk, 0,
+ BPF_SK_STORAGE_GET_F_CREATE);
+
+ if (value)
+ *value = 1;
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c b/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c
new file mode 100644
index 000000000000..8e94e5c080aa
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_helpers.h>
+
+struct sk_stg {
+ __u32 pid;
+ __u32 last_notclose_state;
+ char comm[16];
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_SK_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, struct sk_stg);
+} sk_stg_map SEC(".maps");
+
+/* Testing delete */
+struct {
+ __uint(type, BPF_MAP_TYPE_SK_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, int);
+} del_sk_stg_map SEC(".maps");
+
+char task_comm[16] = "";
+
+SEC("tp_btf/inet_sock_set_state")
+int BPF_PROG(trace_inet_sock_set_state, struct sock *sk, int oldstate,
+ int newstate)
+{
+ struct sk_stg *stg;
+
+ if (newstate == BPF_TCP_CLOSE)
+ return 0;
+
+ stg = bpf_sk_storage_get(&sk_stg_map, sk, 0,
+ BPF_SK_STORAGE_GET_F_CREATE);
+ if (!stg)
+ return 0;
+
+ stg->last_notclose_state = newstate;
+
+ bpf_sk_storage_delete(&del_sk_stg_map, sk);
+
+ return 0;
+}
+
+static void set_task_info(struct sock *sk)
+{
+ struct task_struct *task;
+ struct sk_stg *stg;
+
+ stg = bpf_sk_storage_get(&sk_stg_map, sk, 0,
+ BPF_SK_STORAGE_GET_F_CREATE);
+ if (!stg)
+ return;
+
+ stg->pid = bpf_get_current_pid_tgid();
+
+ task = (struct task_struct *)bpf_get_current_task();
+ bpf_core_read_str(&stg->comm, sizeof(stg->comm), &task->comm);
+ bpf_core_read_str(&task_comm, sizeof(task_comm), &task->comm);
+}
+
+SEC("fentry/inet_csk_listen_start")
+int BPF_PROG(trace_inet_csk_listen_start, struct sock *sk, int backlog)
+{
+ set_task_info(sk);
+
+ return 0;
+}
+
+SEC("fentry/tcp_connect")
+int BPF_PROG(trace_tcp_connect, struct sock *sk)
+{
+ set_task_info(sk);
+
+ return 0;
+}
+
+SEC("fexit/inet_csk_accept")
+int BPF_PROG(inet_csk_accept, struct sock *sk, int flags, int *err, bool kern,
+ struct sock *accepted_sk)
+{
+ set_task_info(accepted_sk);
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.24.1
* Re: [PATCH bpf-next 1/3] bpf: Folding omem_charge() into sk_storage_charge()
2020-11-06 22:07 ` [PATCH bpf-next 1/3] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
@ 2020-11-06 22:44 ` Song Liu
0 siblings, 0 replies; 18+ messages in thread
From: Song Liu @ 2020-11-06 22:44 UTC (permalink / raw)
To: Martin Lau; +Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team, netdev
> On Nov 6, 2020, at 2:07 PM, Martin KaFai Lau <kafai@fb.com> wrote:
>
> sk_storage_charge() is the only user of omem_charge().
> This patch simplifies it by folding omem_charge() into
> sk_storage_charge().
>
> Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
> ---
> net/core/bpf_sk_storage.c | 23 ++++++++++-------------
> 1 file changed, 10 insertions(+), 13 deletions(-)
>
> diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
> index c907f0dc7f87..001eac65e40f 100644
> --- a/net/core/bpf_sk_storage.c
> +++ b/net/core/bpf_sk_storage.c
> @@ -15,18 +15,6 @@
>
> DEFINE_BPF_STORAGE_CACHE(sk_cache);
>
> -static int omem_charge(struct sock *sk, unsigned int size)
> -{
> - /* same check as in sock_kmalloc() */
> - if (size <= sysctl_optmem_max &&
> - atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
> - atomic_add(size, &sk->sk_omem_alloc);
> - return 0;
> - }
> -
> - return -ENOMEM;
> -}
> -
> static struct bpf_local_storage_data *
> sk_storage_lookup(struct sock *sk, struct bpf_map *map, bool cacheit_lockit)
> {
> @@ -316,7 +304,16 @@ BPF_CALL_2(bpf_sk_storage_delete, struct bpf_map *, map, struct sock *, sk)
> static int sk_storage_charge(struct bpf_local_storage_map *smap,
> void *owner, u32 size)
> {
> - return omem_charge(owner, size);
> + struct sock *sk = (struct sock *)owner;
> +
> + /* same check as in sock_kmalloc() */
> + if (size <= sysctl_optmem_max &&
> + atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
> + atomic_add(size, &sk->sk_omem_alloc);
> + return 0;
> + }
> +
> + return -ENOMEM;
> }
>
> static void sk_storage_uncharge(struct bpf_local_storage_map *smap,
> --
> 2.24.1
>
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-06 22:08 ` [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP Martin KaFai Lau
@ 2020-11-06 22:59 ` Song Liu
2020-11-06 23:18 ` Martin KaFai Lau
2020-11-07 1:14 ` Andrii Nakryiko
1 sibling, 1 reply; 18+ messages in thread
From: Song Liu @ 2020-11-06 22:59 UTC (permalink / raw)
To: Martin Lau; +Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team, netdev
> On Nov 6, 2020, at 2:08 PM, Martin KaFai Lau <kafai@fb.com> wrote:
>
> This patch enables the FENTRY/FEXIT/RAW_TP tracing program to use
> the bpf_sk_storage_(get|delete) helper, so those tracing programs
> can access the sk's bpf_local_storage and the later selftest
> will show some examples.
>
> The bpf_sk_storage is currently used in bpf-tcp-cc, tc,
> cg sockops...etc which is running either in softirq or
> task context.
>
> This patch adds bpf_sk_storage_get_tracing_proto and
> bpf_sk_storage_delete_tracing_proto. They will check
> in runtime that the helpers can only be called when serving
> softirq or running in a task context. That should enable
> most common tracing use cases on sk.
>
> During the load time, the new tracing_allowed() function
> will ensure the tracing prog using the bpf_sk_storage_(get|delete)
> helper is not tracing any *sk_storage*() function itself.
> The sk is passed as "void *" when calling into bpf_local_storage.
>
> Signed-off-by: Martin KaFai Lau <kafai@fb.com>
> ---
> include/net/bpf_sk_storage.h | 2 +
> kernel/trace/bpf_trace.c | 5 +++
> net/core/bpf_sk_storage.c | 73 ++++++++++++++++++++++++++++++++++++
> 3 files changed, 80 insertions(+)
>
> diff --git a/include/net/bpf_sk_storage.h b/include/net/bpf_sk_storage.h
> index 3c516dd07caf..0e85713f56df 100644
> --- a/include/net/bpf_sk_storage.h
> +++ b/include/net/bpf_sk_storage.h
> @@ -20,6 +20,8 @@ void bpf_sk_storage_free(struct sock *sk);
>
> extern const struct bpf_func_proto bpf_sk_storage_get_proto;
> extern const struct bpf_func_proto bpf_sk_storage_delete_proto;
> +extern const struct bpf_func_proto bpf_sk_storage_get_tracing_proto;
> +extern const struct bpf_func_proto bpf_sk_storage_delete_tracing_proto;
>
> struct bpf_local_storage_elem;
> struct bpf_sk_storage_diag;
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index e4515b0f62a8..cfce60ad1cb5 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -16,6 +16,7 @@
> #include <linux/syscalls.h>
> #include <linux/error-injection.h>
> #include <linux/btf_ids.h>
> +#include <net/bpf_sk_storage.h>
>
> #include <uapi/linux/bpf.h>
> #include <uapi/linux/btf.h>
> @@ -1735,6 +1736,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> return &bpf_skc_to_tcp_request_sock_proto;
> case BPF_FUNC_skc_to_udp6_sock:
> return &bpf_skc_to_udp6_sock_proto;
> + case BPF_FUNC_sk_storage_get:
> + return &bpf_sk_storage_get_tracing_proto;
> + case BPF_FUNC_sk_storage_delete:
> + return &bpf_sk_storage_delete_tracing_proto;
> #endif
> case BPF_FUNC_seq_printf:
> return prog->expected_attach_type == BPF_TRACE_ITER ?
> diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
> index 001eac65e40f..1a41c917e08d 100644
> --- a/net/core/bpf_sk_storage.c
> +++ b/net/core/bpf_sk_storage.c
> @@ -6,6 +6,7 @@
> #include <linux/types.h>
> #include <linux/spinlock.h>
> #include <linux/bpf.h>
> +#include <linux/btf.h>
> #include <linux/btf_ids.h>
> #include <linux/bpf_local_storage.h>
> #include <net/bpf_sk_storage.h>
> @@ -378,6 +379,78 @@ const struct bpf_func_proto bpf_sk_storage_delete_proto = {
> .arg2_type = ARG_PTR_TO_BTF_ID_SOCK_COMMON,
> };
>
> +static bool bpf_sk_storage_tracing_allowed(const struct bpf_prog *prog)
> +{
> + const struct btf *btf_vmlinux;
> + const struct btf_type *t;
> + const char *tname;
> + u32 btf_id;
> +
> + if (prog->aux->dst_prog)
> + return false;
> +
> + /* Ensure the tracing program is not tracing
> + * any *sk_storage*() function and also
> + * use the bpf_sk_storage_(get|delete) helper.
> + */
> + switch (prog->expected_attach_type) {
> + case BPF_TRACE_RAW_TP:
> + /* bpf_sk_storage has no trace point */
> + return true;
> + case BPF_TRACE_FENTRY:
> + case BPF_TRACE_FEXIT:
> + btf_vmlinux = bpf_get_btf_vmlinux();
> + btf_id = prog->aux->attach_btf_id;
> + t = btf_type_by_id(btf_vmlinux, btf_id);
What happens to fentry/fexit attach to other BPF programs? I guess
we should check for t == NULL?
Thanks,
Song
> + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> + return !strstr(tname, "sk_storage");
> + default:
> + return false;
> + }
> +
> + return false;
> +}
[...]
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-06 22:59 ` Song Liu
@ 2020-11-06 23:18 ` Martin KaFai Lau
2020-11-07 0:20 ` Song Liu
0 siblings, 1 reply; 18+ messages in thread
From: Martin KaFai Lau @ 2020-11-06 23:18 UTC (permalink / raw)
To: Song Liu; +Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team, netdev
On Fri, Nov 06, 2020 at 02:59:14PM -0800, Song Liu wrote:
>
>
> > On Nov 6, 2020, at 2:08 PM, Martin KaFai Lau <kafai@fb.com> wrote:
> >
> > This patch enables the FENTRY/FEXIT/RAW_TP tracing program to use
> > the bpf_sk_storage_(get|delete) helper, so those tracing programs
> > can access the sk's bpf_local_storage and the later selftest
> > will show some examples.
> >
> > The bpf_sk_storage is currently used in bpf-tcp-cc, tc,
> > cg sockops...etc which is running either in softirq or
> > task context.
> >
> > This patch adds bpf_sk_storage_get_tracing_proto and
> > bpf_sk_storage_delete_tracing_proto. They will check
> > in runtime that the helpers can only be called when serving
> > softirq or running in a task context. That should enable
> > most common tracing use cases on sk.
> >
> > During the load time, the new tracing_allowed() function
> > will ensure the tracing prog using the bpf_sk_storage_(get|delete)
> > helper is not tracing any *sk_storage*() function itself.
> > The sk is passed as "void *" when calling into bpf_local_storage.
> >
> > Signed-off-by: Martin KaFai Lau <kafai@fb.com>
> > ---
> > include/net/bpf_sk_storage.h | 2 +
> > kernel/trace/bpf_trace.c | 5 +++
> > net/core/bpf_sk_storage.c | 73 ++++++++++++++++++++++++++++++++++++
> > 3 files changed, 80 insertions(+)
> >
> > diff --git a/include/net/bpf_sk_storage.h b/include/net/bpf_sk_storage.h
> > index 3c516dd07caf..0e85713f56df 100644
> > --- a/include/net/bpf_sk_storage.h
> > +++ b/include/net/bpf_sk_storage.h
> > @@ -20,6 +20,8 @@ void bpf_sk_storage_free(struct sock *sk);
> >
> > extern const struct bpf_func_proto bpf_sk_storage_get_proto;
> > extern const struct bpf_func_proto bpf_sk_storage_delete_proto;
> > +extern const struct bpf_func_proto bpf_sk_storage_get_tracing_proto;
> > +extern const struct bpf_func_proto bpf_sk_storage_delete_tracing_proto;
> >
> > struct bpf_local_storage_elem;
> > struct bpf_sk_storage_diag;
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index e4515b0f62a8..cfce60ad1cb5 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -16,6 +16,7 @@
> > #include <linux/syscalls.h>
> > #include <linux/error-injection.h>
> > #include <linux/btf_ids.h>
> > +#include <net/bpf_sk_storage.h>
> >
> > #include <uapi/linux/bpf.h>
> > #include <uapi/linux/btf.h>
> > @@ -1735,6 +1736,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> > return &bpf_skc_to_tcp_request_sock_proto;
> > case BPF_FUNC_skc_to_udp6_sock:
> > return &bpf_skc_to_udp6_sock_proto;
> > + case BPF_FUNC_sk_storage_get:
> > + return &bpf_sk_storage_get_tracing_proto;
> > + case BPF_FUNC_sk_storage_delete:
> > + return &bpf_sk_storage_delete_tracing_proto;
> > #endif
> > case BPF_FUNC_seq_printf:
> > return prog->expected_attach_type == BPF_TRACE_ITER ?
> > diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
> > index 001eac65e40f..1a41c917e08d 100644
> > --- a/net/core/bpf_sk_storage.c
> > +++ b/net/core/bpf_sk_storage.c
> > @@ -6,6 +6,7 @@
> > #include <linux/types.h>
> > #include <linux/spinlock.h>
> > #include <linux/bpf.h>
> > +#include <linux/btf.h>
> > #include <linux/btf_ids.h>
> > #include <linux/bpf_local_storage.h>
> > #include <net/bpf_sk_storage.h>
> > @@ -378,6 +379,78 @@ const struct bpf_func_proto bpf_sk_storage_delete_proto = {
> > .arg2_type = ARG_PTR_TO_BTF_ID_SOCK_COMMON,
> > };
> >
> > +static bool bpf_sk_storage_tracing_allowed(const struct bpf_prog *prog)
> > +{
> > + const struct btf *btf_vmlinux;
> > + const struct btf_type *t;
> > + const char *tname;
> > + u32 btf_id;
> > +
> > + if (prog->aux->dst_prog)
> > + return false;
> > +
> > + /* Ensure the tracing program is not tracing
> > + * any *sk_storage*() function and also
> > + * use the bpf_sk_storage_(get|delete) helper.
> > + */
> > + switch (prog->expected_attach_type) {
> > + case BPF_TRACE_RAW_TP:
> > + /* bpf_sk_storage has no trace point */
> > + return true;
> > + case BPF_TRACE_FENTRY:
> > + case BPF_TRACE_FEXIT:
> > + btf_vmlinux = bpf_get_btf_vmlinux();
> > + btf_id = prog->aux->attach_btf_id;
> > + t = btf_type_by_id(btf_vmlinux, btf_id);
>
> What happens to fentry/fexit attach to other BPF programs? I guess
> we should check for t == NULL?
It does not support tracing BPF program and using bpf_sk_storage
at the same time for now, so there is a "if (prog->aux->dst_prog)" test earlier.
It could be extended to do it later as a follow up.
I missed to mention that in the commit message.
"t" should not be NULL here when tracing a kernel function.
The verifier should have already checked it and ensured "t" is a FUNC.
> > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > + return !strstr(tname, "sk_storage");
> > + default:
> > + return false;
> > + }
> > +
> > + return false;
> > +}
>
> [...]
>
>
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-06 23:18 ` Martin KaFai Lau
@ 2020-11-07 0:20 ` Song Liu
0 siblings, 0 replies; 18+ messages in thread
From: Song Liu @ 2020-11-07 0:20 UTC (permalink / raw)
To: Martin Lau; +Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team, netdev
> On Nov 6, 2020, at 3:18 PM, Martin Lau <kafai@fb.com> wrote:
>
> On Fri, Nov 06, 2020 at 02:59:14PM -0800, Song Liu wrote:
>>
>>
>>> On Nov 6, 2020, at 2:08 PM, Martin KaFai Lau <kafai@fb.com> wrote:
>>>
[...]
>>> +static bool bpf_sk_storage_tracing_allowed(const struct bpf_prog *prog)
>>> +{
>>> + const struct btf *btf_vmlinux;
>>> + const struct btf_type *t;
>>> + const char *tname;
>>> + u32 btf_id;
>>> +
>>> + if (prog->aux->dst_prog)
>>> + return false;
>>> +
>>> + /* Ensure the tracing program is not tracing
>>> + * any *sk_storage*() function and also
>>> + * use the bpf_sk_storage_(get|delete) helper.
>>> + */
>>> + switch (prog->expected_attach_type) {
>>> + case BPF_TRACE_RAW_TP:
>>> + /* bpf_sk_storage has no trace point */
>>> + return true;
>>> + case BPF_TRACE_FENTRY:
>>> + case BPF_TRACE_FEXIT:
>>> + btf_vmlinux = bpf_get_btf_vmlinux();
>>> + btf_id = prog->aux->attach_btf_id;
>>> + t = btf_type_by_id(btf_vmlinux, btf_id);
>>
>> What happens to fentry/fexit attach to other BPF programs? I guess
>> we should check for t == NULL?
> It does not support tracing BPF program and using bpf_sk_storage
> at the same time for now, so there is a "if (prog->aux->dst_prog)" test earlier.
> It could be extended to do it later as a follow up.
> I missed to mention that in the commit message.
>
> "t" should not be NULL here when tracing a kernel function.
> The verifier should have already checked it and ensured "t" is a FUNC.
Ah, I missed the dst_prog check. Thanks for the explanation.
Acked-by: Song Liu <songliubraving@fb.com>
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-06 22:08 ` [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP Martin KaFai Lau
2020-11-06 22:59 ` Song Liu
@ 2020-11-07 1:14 ` Andrii Nakryiko
2020-11-07 1:52 ` Martin KaFai Lau
1 sibling, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2020-11-07 1:14 UTC (permalink / raw)
To: Martin KaFai Lau
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team, Networking
On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> This patch enables the FENTRY/FEXIT/RAW_TP tracing programs to use
> the bpf_sk_storage_(get|delete) helpers, so those tracing programs
> can access the sk's bpf_local_storage; the later selftest
> will show some examples.
>
> The bpf_sk_storage is currently used in bpf-tcp-cc, tc,
> cg sockops, etc., which run either in softirq or
> task context.
>
> This patch adds bpf_sk_storage_get_tracing_proto and
> bpf_sk_storage_delete_tracing_proto. They check at runtime
> that the helpers are only called when serving
> softirq or running in a task context. That should enable
> most common tracing use cases on sk.
>
> At load time, the new tracing_allowed() function
> ensures the tracing prog using the bpf_sk_storage_(get|delete)
> helper is not tracing any *sk_storage*() function itself.
> The sk is passed as "void *" when calling into bpf_local_storage.
>
> Signed-off-by: Martin KaFai Lau <kafai@fb.com>
> ---
> include/net/bpf_sk_storage.h | 2 +
> kernel/trace/bpf_trace.c | 5 +++
> net/core/bpf_sk_storage.c | 73 ++++++++++++++++++++++++++++++++++++
> 3 files changed, 80 insertions(+)
>
[...]
> + switch (prog->expected_attach_type) {
> + case BPF_TRACE_RAW_TP:
> + /* bpf_sk_storage has no trace point */
> + return true;
> + case BPF_TRACE_FENTRY:
> + case BPF_TRACE_FEXIT:
> + btf_vmlinux = bpf_get_btf_vmlinux();
> + btf_id = prog->aux->attach_btf_id;
> + t = btf_type_by_id(btf_vmlinux, btf_id);
> + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> + return !strstr(tname, "sk_storage");
I'm always feeling uneasy about substring checks... Also, KP just
fixed the issue with string-based checks for LSM. Can we use a
BTF_ID_SET of blacklisted functions instead?
> + default:
> + return false;
> + }
> +
> + return false;
> +}
> +
[...]
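On the BTF_ID_SET suggestion: in the kernel, the BTF_ID_SET/BTF_ID macros resolve function names to BTF type IDs at build time, and membership in the resulting sorted array is tested with btf_id_set_contains(). A rough userspace analogue of that membership check is sketched below; the IDs are made-up placeholders, not real vmlinux BTF IDs.

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace analogue of the kernel's btf_id_set_contains():
 * a sorted array of BTF type IDs probed with binary search.
 * The IDs here are hypothetical placeholders.
 */
static const unsigned int denied_btf_ids[] = { 1017, 2344, 2345, 5102 };

static int cmp_u32(const void *a, const void *b)
{
	unsigned int x = *(const unsigned int *)a;
	unsigned int y = *(const unsigned int *)b;

	return x < y ? -1 : x > y;
}

static int id_set_contains(const unsigned int *set, size_t cnt,
			   unsigned int id)
{
	return bsearch(&id, set, cnt, sizeof(*set), cmp_u32) != NULL;
}
```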
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-07 1:14 ` Andrii Nakryiko
@ 2020-11-07 1:52 ` Martin KaFai Lau
2020-11-09 18:09 ` Andrii Nakryiko
0 siblings, 1 reply; 18+ messages in thread
From: Martin KaFai Lau @ 2020-11-07 1:52 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team, Networking
On Fri, Nov 06, 2020 at 05:14:14PM -0800, Andrii Nakryiko wrote:
> On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
> >
> > [...]
> >
> > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > + return !strstr(tname, "sk_storage");
>
> I'm always feeling uneasy about substring checks... Also, KP just
> fixed the issue with string-based checks for LSM. Can we use a
> BTF_ID_SET of blacklisted functions instead?
KP one is different. It accidentally whitelist-ed more than it should.
It is a blacklist here. It is actually cleaner and safer to blacklist
all functions with "sk_storage" and too pessimistic is fine here.
>
> > + default:
> > + return false;
> > + }
> > +
> > + return false;
> > +}
> > +
>
> [...]
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-07 1:52 ` Martin KaFai Lau
@ 2020-11-09 18:09 ` Andrii Nakryiko
2020-11-09 20:32 ` John Fastabend
0 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2020-11-09 18:09 UTC (permalink / raw)
To: Martin KaFai Lau
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team, Networking
On Fri, Nov 6, 2020 at 5:52 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Fri, Nov 06, 2020 at 05:14:14PM -0800, Andrii Nakryiko wrote:
> > On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > >
> > > [...]
> > >
> > > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > > + return !strstr(tname, "sk_storage");
> >
> > I'm always feeling uneasy about substring checks... Also, KP just
> > fixed the issue with string-based checks for LSM. Can we use a
> > BTF_ID_SET of blacklisted functions instead?
> KP one is different. It accidentally whitelist-ed more than it should.
>
> It is a blacklist here. It is actually cleaner and safer to blacklist
> all functions with "sk_storage" and too pessimistic is fine here.
Fine for whom? Prefix check would be half-bad, but substring check is
horrible. Suddenly "task_storage" (and anything related) would be also
blacklisted. Let's do a prefix check at least.
>
> >
> > > + default:
> > > + return false;
> > > + }
> > > +
> > > + return false;
> > > +}
> > > +
> >
> > [...]
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-09 18:09 ` Andrii Nakryiko
@ 2020-11-09 20:32 ` John Fastabend
2020-11-10 22:01 ` KP Singh
0 siblings, 1 reply; 18+ messages in thread
From: John Fastabend @ 2020-11-09 20:32 UTC (permalink / raw)
To: Andrii Nakryiko, Martin KaFai Lau
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team, Networking
Andrii Nakryiko wrote:
> On Fri, Nov 6, 2020 at 5:52 PM Martin KaFai Lau <kafai@fb.com> wrote:
> >
> > On Fri, Nov 06, 2020 at 05:14:14PM -0800, Andrii Nakryiko wrote:
> > > On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > >
> > > > [...]
> > > >
> > > > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > > > + return !strstr(tname, "sk_storage");
> > >
> > > I'm always feeling uneasy about substring checks... Also, KP just
> > > fixed the issue with string-based checks for LSM. Can we use a
> > > BTF_ID_SET of blacklisted functions instead?
> > KP one is different. It accidentally whitelist-ed more than it should.
> >
> > It is a blacklist here. It is actually cleaner and safer to blacklist
> > all functions with "sk_storage" and too pessimistic is fine here.
>
> Fine for whom? Prefix check would be half-bad, but substring check is
> horrible. Suddenly "task_storage" (and anything related) would be also
> blacklisted. Let's do a prefix check at least.
>
Agree, prefix check sounds like a good idea. But, just doing a quick
grep seems like it will need at least bpf_sk_storage and sk_storage to
catch everything.
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-09 20:32 ` John Fastabend
@ 2020-11-10 22:01 ` KP Singh
2020-11-10 23:43 ` Martin KaFai Lau
0 siblings, 1 reply; 18+ messages in thread
From: KP Singh @ 2020-11-10 22:01 UTC (permalink / raw)
To: John Fastabend
Cc: Andrii Nakryiko, Martin KaFai Lau, bpf, Alexei Starovoitov,
Daniel Borkmann, Kernel Team, Networking
On Mon, Nov 9, 2020 at 9:32 PM John Fastabend <john.fastabend@gmail.com> wrote:
>
> Andrii Nakryiko wrote:
> > On Fri, Nov 6, 2020 at 5:52 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > >
> > > On Fri, Nov 06, 2020 at 05:14:14PM -0800, Andrii Nakryiko wrote:
> > > > On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > >
> > > > > [...]
> > > > >
> > > > > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > > > > + return !strstr(tname, "sk_storage");
> > > >
> > > > I'm always feeling uneasy about substring checks... Also, KP just
> > > > fixed the issue with string-based checks for LSM. Can we use a
> > > > BTF_ID_SET of blacklisted functions instead?
> > > KP one is different. It accidentally whitelist-ed more than it should.
> > >
> > > It is a blacklist here. It is actually cleaner and safer to blacklist
> > > all functions with "sk_storage" and too pessimistic is fine here.
> >
> > Fine for whom? Prefix check would be half-bad, but substring check is
> > horrible. Suddenly "task_storage" (and anything related) would be also
> > blacklisted. Let's do a prefix check at least.
> >
>
> Agree, prefix check sounds like a good idea. But, just doing a quick
> grep seems like it will need at least bpf_sk_storage and sk_storage to
> catch everything.
Is there any reason we are not using BTF ID sets and an allow list similar
to bpf_d_path helper? (apart from the obvious inconvenience of
needing to update the set in the kernel)
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-10 22:01 ` KP Singh
@ 2020-11-10 23:43 ` Martin KaFai Lau
2020-11-10 23:53 ` Andrii Nakryiko
0 siblings, 1 reply; 18+ messages in thread
From: Martin KaFai Lau @ 2020-11-10 23:43 UTC (permalink / raw)
To: KP Singh
Cc: John Fastabend, Andrii Nakryiko, bpf, Alexei Starovoitov,
Daniel Borkmann, Kernel Team, Networking
On Tue, Nov 10, 2020 at 11:01:12PM +0100, KP Singh wrote:
> On Mon, Nov 9, 2020 at 9:32 PM John Fastabend <john.fastabend@gmail.com> wrote:
> >
> > Andrii Nakryiko wrote:
> > > On Fri, Nov 6, 2020 at 5:52 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > >
> > > > On Fri, Nov 06, 2020 at 05:14:14PM -0800, Andrii Nakryiko wrote:
> > > > > On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > > >
> > > > > > [...]
> > > > > >
> > > > > > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > > > > > + return !strstr(tname, "sk_storage");
> > > > >
> > > > > I'm always feeling uneasy about substring checks... Also, KP just
> > > > > fixed the issue with string-based checks for LSM. Can we use a
> > > > > BTF_ID_SET of blacklisted functions instead?
> > > > KP one is different. It accidentally whitelist-ed more than it should.
> > > >
> > > > It is a blacklist here. It is actually cleaner and safer to blacklist
> > > > all functions with "sk_storage" and too pessimistic is fine here.
> > >
> > > Fine for whom? Prefix check would be half-bad, but substring check is
> > > horrible. Suddenly "task_storage" (and anything related) would be also
> > > blacklisted. Let's do a prefix check at least.
> > >
> >
> > Agree, prefix check sounds like a good idea. But, just doing a quick
> > grep seems like it will need at least bpf_sk_storage and sk_storage to
> > catch everything.
>
> Is there any reason we are not using BTF ID sets and an allow list similar
> to bpf_d_path helper? (apart from the obvious inconvenience of
> needing to update the set in the kernel)
It is a blacklist here, a small recap from commit message.
> During the load time, the new tracing_allowed() function
> will ensure the tracing prog using the bpf_sk_storage_(get|delete)
> helper is not tracing any *sk_storage*() function itself.
> The sk is passed as "void *" when calling into bpf_local_storage.
Both BTF_ID and string-based (either prefix/substr) will work.
The intention is to first disallow a tracing program from tracing
any function in bpf_sk_storage.c and also calling the
bpf_sk_storage_(get|delete) helper at the same time.
This blacklist can be revisited later if there would
be a use case in some of the blacklist-ed
functions (which I doubt).
To use BTF_ID, it needs to consider about if the current (and future)
bpf_sk_storage function can be used in BTF_ID or not:
static, global/external, or inlined.
If BTF_ID is the best way for doing all black/white list, I don't mind
either. I could force some to inline and we need to remember
to revisit the blacklist when the scope of fentry/fexit tracable
function changed, e.g. when static function becomes traceable
later. The future changes to bpf_sk_storage.c will need to
adjust this list also.
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-10 23:43 ` Martin KaFai Lau
@ 2020-11-10 23:53 ` Andrii Nakryiko
2020-11-11 0:07 ` Martin KaFai Lau
0 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2020-11-10 23:53 UTC (permalink / raw)
To: Martin KaFai Lau
Cc: KP Singh, John Fastabend, bpf, Alexei Starovoitov,
Daniel Borkmann, Kernel Team, Networking
On Tue, Nov 10, 2020 at 3:43 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Tue, Nov 10, 2020 at 11:01:12PM +0100, KP Singh wrote:
> > On Mon, Nov 9, 2020 at 9:32 PM John Fastabend <john.fastabend@gmail.com> wrote:
> > >
> > > Andrii Nakryiko wrote:
> > > > On Fri, Nov 6, 2020 at 5:52 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > >
> > > > > On Fri, Nov 06, 2020 at 05:14:14PM -0800, Andrii Nakryiko wrote:
> > > > > > On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > > > >
> > > > > > > [...]
> > > > > > >
> > > > > > > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > > > > > > + return !strstr(tname, "sk_storage");
> > > > > >
> > > > > > I'm always feeling uneasy about substring checks... Also, KP just
> > > > > > fixed the issue with string-based checks for LSM. Can we use a
> > > > > > BTF_ID_SET of blacklisted functions instead?
> > > > > KP one is different. It accidentally whitelist-ed more than it should.
> > > > >
> > > > > It is a blacklist here. It is actually cleaner and safer to blacklist
> > > > > all functions with "sk_storage" and too pessimistic is fine here.
> > > >
> > > > Fine for whom? Prefix check would be half-bad, but substring check is
> > > > horrible. Suddenly "task_storage" (and anything related) would be also
> > > > blacklisted. Let's do a prefix check at least.
> > > >
> > >
> > > Agree, prefix check sounds like a good idea. But, just doing a quick
> > > grep seems like it will need at least bpf_sk_storage and sk_storage to
> > > catch everything.
> >
> > Is there any reason we are not using BTF ID sets and an allow list similar
> > to bpf_d_path helper? (apart from the obvious inconvenience of
> > needing to update the set in the kernel)
> It is a blacklist here, a small recap from commit message.
>
> > During the load time, the new tracing_allowed() function
> > will ensure the tracing prog using the bpf_sk_storage_(get|delete)
> > helper is not tracing any *sk_storage*() function itself.
> > The sk is passed as "void *" when calling into bpf_local_storage.
>
> Both BTF_ID and string-based (either prefix/substr) will work.
>
> The intention is to first disallow a tracing program from tracing
> any function in bpf_sk_storage.c and also calling the
> bpf_sk_storage_(get|delete) helper at the same time.
> This blacklist can be revisited later if there would
> be a use case in some of the blacklist-ed
> functions (which I doubt).
>
> To use BTF_ID, it needs to consider about if the current (and future)
> bpf_sk_storage function can be used in BTF_ID or not:
> static, global/external, or inlined.
>
> If BTF_ID is the best way for doing all black/white list, I don't mind
> either. I could force some to inline and we need to remember
> to revisit the blacklist when the scope of fentry/fexit tracable
> function changed, e.g. when static function becomes traceable
You can consider static functions traceable already. Arnaldo landed a
change a day or so ago in pahole that exposes static functions in BTF
and makes it possible to fentry/fexit attach them.
> later. The future changes to bpf_sk_storage.c will need to
> adjust this list also.
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-10 23:53 ` Andrii Nakryiko
@ 2020-11-11 0:07 ` Martin KaFai Lau
2020-11-11 0:17 ` Andrii Nakryiko
0 siblings, 1 reply; 18+ messages in thread
From: Martin KaFai Lau @ 2020-11-11 0:07 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: KP Singh, John Fastabend, bpf, Alexei Starovoitov,
Daniel Borkmann, Kernel Team, Networking
On Tue, Nov 10, 2020 at 03:53:13PM -0800, Andrii Nakryiko wrote:
> On Tue, Nov 10, 2020 at 3:43 PM Martin KaFai Lau <kafai@fb.com> wrote:
> >
> > On Tue, Nov 10, 2020 at 11:01:12PM +0100, KP Singh wrote:
> > > On Mon, Nov 9, 2020 at 9:32 PM John Fastabend <john.fastabend@gmail.com> wrote:
> > > >
> > > > Andrii Nakryiko wrote:
> > > > > On Fri, Nov 6, 2020 at 5:52 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > > >
> > > > > > On Fri, Nov 06, 2020 at 05:14:14PM -0800, Andrii Nakryiko wrote:
> > > > > > > On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > > > > >
> > > > > > > > [...]
> > > > > > > >
> > > > > > > > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > > > > > > > + return !strstr(tname, "sk_storage");
> > > > > > >
> > > > > > > I'm always feeling uneasy about substring checks... Also, KP just
> > > > > > > fixed the issue with string-based checks for LSM. Can we use a
> > > > > > > BTF_ID_SET of blacklisted functions instead?
> > > > > > KP one is different. It accidentally whitelist-ed more than it should.
> > > > > >
> > > > > > It is a blacklist here. It is actually cleaner and safer to blacklist
> > > > > > all functions with "sk_storage" and too pessimistic is fine here.
> > > > >
> > > > > Fine for whom? Prefix check would be half-bad, but substring check is
> > > > > horrible. Suddenly "task_storage" (and anything related) would be also
> > > > > blacklisted. Let's do a prefix check at least.
> > > > >
> > > >
> > > > Agree, prefix check sounds like a good idea. But, just doing a quick
> > > > grep seems like it will need at least bpf_sk_storage and sk_storage to
> > > > catch everything.
> > >
> > > Is there any reason we are not using BTF ID sets and an allow list similar
> > > to bpf_d_path helper? (apart from the obvious inconvenience of
> > > needing to update the set in the kernel)
> > It is a blacklist here, a small recap from commit message.
> >
> > > During the load time, the new tracing_allowed() function
> > > will ensure the tracing prog using the bpf_sk_storage_(get|delete)
> > > helper is not tracing any *sk_storage*() function itself.
> > > The sk is passed as "void *" when calling into bpf_local_storage.
> >
> > Both BTF_ID and string-based (either prefix/substr) will work.
> >
> > The intention is to first disallow a tracing program from tracing
> > any function in bpf_sk_storage.c and also calling the
> > bpf_sk_storage_(get|delete) helper at the same time.
> > This blacklist can be revisited later if there would
> > be a use case in some of the blacklist-ed
> > functions (which I doubt).
> >
> > To use BTF_ID, it needs to consider about if the current (and future)
> > bpf_sk_storage function can be used in BTF_ID or not:
> > static, global/external, or inlined.
> >
> > If BTF_ID is the best way for doing all black/white list, I don't mind
> > either. I could force some to inline and we need to remember
> > to revisit the blacklist when the scope of fentry/fexit tracable
> > function changed, e.g. when static function becomes traceable
>
> You can consider static functions traceable already. Arnaldo landed a
> change a day or so ago in pahole that exposes static functions in BTF
> and makes it possible to fentry/fexit attach them.
Good to know.
Is all static traceable (and can be used in BTF_ID)?
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-11 0:07 ` Martin KaFai Lau
@ 2020-11-11 0:17 ` Andrii Nakryiko
2020-11-11 0:20 ` Martin KaFai Lau
0 siblings, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2020-11-11 0:17 UTC (permalink / raw)
To: Martin KaFai Lau
Cc: KP Singh, John Fastabend, bpf, Alexei Starovoitov,
Daniel Borkmann, Kernel Team, Networking
On Tue, Nov 10, 2020 at 4:07 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Tue, Nov 10, 2020 at 03:53:13PM -0800, Andrii Nakryiko wrote:
> > On Tue, Nov 10, 2020 at 3:43 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > >
> > > On Tue, Nov 10, 2020 at 11:01:12PM +0100, KP Singh wrote:
> > > > On Mon, Nov 9, 2020 at 9:32 PM John Fastabend <john.fastabend@gmail.com> wrote:
> > > > >
> > > > > Andrii Nakryiko wrote:
> > > > > > On Fri, Nov 6, 2020 at 5:52 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > > > >
> > > > > > > On Fri, Nov 06, 2020 at 05:14:14PM -0800, Andrii Nakryiko wrote:
> > > > > > > > On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > > > > > >
> > > > > > > > > This patch enables the FENTRY/FEXIT/RAW_TP tracing program to use
> > > > > > > > > the bpf_sk_storage_(get|delete) helper, so those tracing programs
> > > > > > > > > can access the sk's bpf_local_storage and the later selftest
> > > > > > > > > will show some examples.
> > > > > > > > >
> > > > > > > > > The bpf_sk_storage is currently used in bpf-tcp-cc, tc,
> > > > > > > > > cg sockops...etc which is running either in softirq or
> > > > > > > > > task context.
> > > > > > > > >
> > > > > > > > > This patch adds bpf_sk_storage_get_tracing_proto and
> > > > > > > > > bpf_sk_storage_delete_tracing_proto. They check at runtime
> > > > > > > > > that the helpers are only called when serving
> > > > > > > > > softirq or running in a task context. That should enable
> > > > > > > > > most common tracing use cases on sk.
> > > > > > > > >
> > > > > > > > > During the load time, the new tracing_allowed() function
> > > > > > > > > will ensure the tracing prog using the bpf_sk_storage_(get|delete)
> > > > > > > > > helper is not tracing any *sk_storage*() function itself.
> > > > > > > > > The sk is passed as "void *" when calling into bpf_local_storage.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Martin KaFai Lau <kafai@fb.com>
> > > > > > > > > ---
> > > > > > > > > include/net/bpf_sk_storage.h | 2 +
> > > > > > > > > kernel/trace/bpf_trace.c | 5 +++
> > > > > > > > > net/core/bpf_sk_storage.c | 73 ++++++++++++++++++++++++++++++++++++
> > > > > > > > > 3 files changed, 80 insertions(+)
> > > > > > > > >
> > > > > > > >
> > > > > > > > [...]
> > > > > > > >
> > > > > > > > > + switch (prog->expected_attach_type) {
> > > > > > > > > + case BPF_TRACE_RAW_TP:
> > > > > > > > > + /* bpf_sk_storage has no trace point */
> > > > > > > > > + return true;
> > > > > > > > > + case BPF_TRACE_FENTRY:
> > > > > > > > > + case BPF_TRACE_FEXIT:
> > > > > > > > > + btf_vmlinux = bpf_get_btf_vmlinux();
> > > > > > > > > + btf_id = prog->aux->attach_btf_id;
> > > > > > > > > + t = btf_type_by_id(btf_vmlinux, btf_id);
> > > > > > > > > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > > > > > > > > + return !strstr(tname, "sk_storage");
> > > > > > > >
> > > > > > > > I'm always feeling uneasy about substring checks... Also, KP just
> > > > > > > > fixed the issue with string-based checks for LSM. Can we use a
> > > > > > > > BTF_ID_SET of blacklisted functions instead?
> > > > > > > KP's case is different. It accidentally whitelisted more than it should.
> > > > > > >
> > > > > > > It is a blacklist here. It is actually cleaner and safer to blacklist
> > > > > > > all functions with "sk_storage", and being too pessimistic is fine here.
> > > > > >
> > > > > > Fine for whom? Prefix check would be half-bad, but substring check is
> > > > > > horrible. Suddenly "task_storage" (and anything related) would be also
> > > > > > blacklisted. Let's do a prefix check at least.
> > > > > >
> > > > >
> > > > > Agree, prefix check sounds like a good idea. But, just doing a quick
> > > > > grep seems like it will need at least bpf_sk_storage and sk_storage to
> > > > > catch everything.
> > > >
> > > > Is there any reason we are not using BTF ID sets and an allow list similar
> > > > to bpf_d_path helper? (apart from the obvious inconvenience of
> > > > needing to update the set in the kernel)
> > > It is a blacklist here, a small recap from commit message.
> > >
> > > > During the load time, the new tracing_allowed() function
> > > > will ensure the tracing prog using the bpf_sk_storage_(get|delete)
> > > > helper is not tracing any *sk_storage*() function itself.
> > > > The sk is passed as "void *" when calling into bpf_local_storage.
> > >
> > > Both BTF_ID and string-based checks (either prefix or substring) will work.
> > >
> > > The intention is to first disallow a tracing program from tracing
> > > any function in bpf_sk_storage.c and also calling the
> > > bpf_sk_storage_(get|delete) helper at the same time.
> > > This blacklist can be revisited later if a use case emerges
> > > for some of the blacklisted functions (which I doubt).
> > >
> > > To use BTF_ID, we need to consider whether the current (and future)
> > > bpf_sk_storage functions can be used with BTF_ID:
> > > static, global/external, or inlined.
> > >
> > > If BTF_ID is the best way to do all black/white listing, I don't mind
> > > either. I could force some to be inlined, but we would need to remember
> > > to revisit the blacklist when the scope of fentry/fexit-traceable
> > > functions changes, e.g. when static functions become traceable
> >
> > You can consider static functions traceable already. Arnaldo landed a
> > change a day or so ago in pahole that exposes static functions in BTF
> > and makes it possible to fentry/fexit attach them.
> Good to know.
>
> Are all static functions traceable (and usable in BTF_ID)?
Only those that end up not inlined, I think, similar to kprobes.
pahole actually checks the mcount section to keep only the functions that
are attachable with ftrace. See [0] for the patches.
[0] https://patchwork.kernel.org/project/netdevbpf/list/?series=379377&state=*
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
2020-11-11 0:17 ` Andrii Nakryiko
@ 2020-11-11 0:20 ` Martin KaFai Lau
0 siblings, 0 replies; 18+ messages in thread
From: Martin KaFai Lau @ 2020-11-11 0:20 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: KP Singh, John Fastabend, bpf, Alexei Starovoitov,
Daniel Borkmann, Kernel Team, Networking
On Tue, Nov 10, 2020 at 04:17:06PM -0800, Andrii Nakryiko wrote:
> On Tue, Nov 10, 2020 at 4:07 PM Martin KaFai Lau <kafai@fb.com> wrote:
> >
> > On Tue, Nov 10, 2020 at 03:53:13PM -0800, Andrii Nakryiko wrote:
> > > On Tue, Nov 10, 2020 at 3:43 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > >
> > > > On Tue, Nov 10, 2020 at 11:01:12PM +0100, KP Singh wrote:
> > > > > On Mon, Nov 9, 2020 at 9:32 PM John Fastabend <john.fastabend@gmail.com> wrote:
> > > > > >
> > > > > > Andrii Nakryiko wrote:
> > > > > > > On Fri, Nov 6, 2020 at 5:52 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > > > > >
> > > > > > > > On Fri, Nov 06, 2020 at 05:14:14PM -0800, Andrii Nakryiko wrote:
> > > > > > > > > On Fri, Nov 6, 2020 at 2:08 PM Martin KaFai Lau <kafai@fb.com> wrote:
> > > > > > > > > >
> > > > > > > > > > This patch enables the FENTRY/FEXIT/RAW_TP tracing program to use
> > > > > > > > > > the bpf_sk_storage_(get|delete) helper, so those tracing programs
> > > > > > > > > > can access the sk's bpf_local_storage and the later selftest
> > > > > > > > > > will show some examples.
> > > > > > > > > >
> > > > > > > > > > The bpf_sk_storage is currently used in bpf-tcp-cc, tc,
> > > > > > > > > > cg sockops...etc which is running either in softirq or
> > > > > > > > > > task context.
> > > > > > > > > >
> > > > > > > > > > This patch adds bpf_sk_storage_get_tracing_proto and
> > > > > > > > > > bpf_sk_storage_delete_tracing_proto. They check at runtime
> > > > > > > > > > that the helpers are only called when serving
> > > > > > > > > > softirq or running in a task context. That should enable
> > > > > > > > > > most common tracing use cases on sk.
> > > > > > > > > >
> > > > > > > > > > During the load time, the new tracing_allowed() function
> > > > > > > > > > will ensure the tracing prog using the bpf_sk_storage_(get|delete)
> > > > > > > > > > helper is not tracing any *sk_storage*() function itself.
> > > > > > > > > > The sk is passed as "void *" when calling into bpf_local_storage.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Martin KaFai Lau <kafai@fb.com>
> > > > > > > > > > ---
> > > > > > > > > > include/net/bpf_sk_storage.h | 2 +
> > > > > > > > > > kernel/trace/bpf_trace.c | 5 +++
> > > > > > > > > > net/core/bpf_sk_storage.c | 73 ++++++++++++++++++++++++++++++++++++
> > > > > > > > > > 3 files changed, 80 insertions(+)
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > [...]
> > > > > > > > >
> > > > > > > > > > + switch (prog->expected_attach_type) {
> > > > > > > > > > + case BPF_TRACE_RAW_TP:
> > > > > > > > > > + /* bpf_sk_storage has no trace point */
> > > > > > > > > > + return true;
> > > > > > > > > > + case BPF_TRACE_FENTRY:
> > > > > > > > > > + case BPF_TRACE_FEXIT:
> > > > > > > > > > + btf_vmlinux = bpf_get_btf_vmlinux();
> > > > > > > > > > + btf_id = prog->aux->attach_btf_id;
> > > > > > > > > > + t = btf_type_by_id(btf_vmlinux, btf_id);
> > > > > > > > > > + tname = btf_name_by_offset(btf_vmlinux, t->name_off);
> > > > > > > > > > + return !strstr(tname, "sk_storage");
> > > > > > > > >
> > > > > > > > > I'm always feeling uneasy about substring checks... Also, KP just
> > > > > > > > > fixed the issue with string-based checks for LSM. Can we use a
> > > > > > > > > BTF_ID_SET of blacklisted functions instead?
> > > > > > > > KP's case is different. It accidentally whitelisted more than it should.
> > > > > > > >
> > > > > > > > It is a blacklist here. It is actually cleaner and safer to blacklist
> > > > > > > > all functions with "sk_storage", and being too pessimistic is fine here.
> > > > > > >
> > > > > > > Fine for whom? Prefix check would be half-bad, but substring check is
> > > > > > > horrible. Suddenly "task_storage" (and anything related) would be also
> > > > > > > blacklisted. Let's do a prefix check at least.
> > > > > > >
> > > > > >
> > > > > > Agree, prefix check sounds like a good idea. But, just doing a quick
> > > > > > grep seems like it will need at least bpf_sk_storage and sk_storage to
> > > > > > catch everything.
> > > > >
> > > > > Is there any reason we are not using BTF ID sets and an allow list similar
> > > > > to bpf_d_path helper? (apart from the obvious inconvenience of
> > > > > needing to update the set in the kernel)
> > > > It is a blacklist here, a small recap from commit message.
> > > >
> > > > > During the load time, the new tracing_allowed() function
> > > > > will ensure the tracing prog using the bpf_sk_storage_(get|delete)
> > > > > helper is not tracing any *sk_storage*() function itself.
> > > > > The sk is passed as "void *" when calling into bpf_local_storage.
> > > >
> > > > Both BTF_ID and string-based checks (either prefix or substring) will work.
> > > >
> > > > The intention is to first disallow a tracing program from tracing
> > > > any function in bpf_sk_storage.c and also calling the
> > > > bpf_sk_storage_(get|delete) helper at the same time.
> > > > This blacklist can be revisited later if a use case emerges
> > > > for some of the blacklisted functions (which I doubt).
> > > >
> > > > To use BTF_ID, we need to consider whether the current (and future)
> > > > bpf_sk_storage functions can be used with BTF_ID:
> > > > static, global/external, or inlined.
> > > >
> > > > If BTF_ID is the best way to do all black/white listing, I don't mind
> > > > either. I could force some to be inlined, but we would need to remember
> > > > to revisit the blacklist when the scope of fentry/fexit-traceable
> > > > functions changes, e.g. when static functions become traceable
> > >
> > > You can consider static functions traceable already. Arnaldo landed a
> > > change a day or so ago in pahole that exposes static functions in BTF
> > > and makes it possible to fentry/fexit attach them.
> > Good to know.
> >
> > Are all static functions traceable (and usable in BTF_ID)?
>
> Only those that end up not inlined, I think, similar to kprobes.
> pahole actually checks the mcount section to keep only the functions that
> are attachable with ftrace. See [0] for the patches.
>
> [0] https://patchwork.kernel.org/project/netdevbpf/list/?series=379377&state=*
I will go with the prefix check then, to avoid tagging functions
with inline/noinline.
^ permalink raw reply [flat|nested] 18+ messages in thread
end of thread, other threads:[~2020-11-11 0:20 UTC | newest]
Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-06 22:07 [PATCH bpf-next 0/3] bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP Martin KaFai Lau
2020-11-06 22:07 ` [PATCH bpf-next 1/3] bpf: Folding omem_charge() into sk_storage_charge() Martin KaFai Lau
2020-11-06 22:44 ` Song Liu
2020-11-06 22:08 ` [PATCH bpf-next 2/3] bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP Martin KaFai Lau
2020-11-06 22:59 ` Song Liu
2020-11-06 23:18 ` Martin KaFai Lau
2020-11-07 0:20 ` Song Liu
2020-11-07 1:14 ` Andrii Nakryiko
2020-11-07 1:52 ` Martin KaFai Lau
2020-11-09 18:09 ` Andrii Nakryiko
2020-11-09 20:32 ` John Fastabend
2020-11-10 22:01 ` KP Singh
2020-11-10 23:43 ` Martin KaFai Lau
2020-11-10 23:53 ` Andrii Nakryiko
2020-11-11 0:07 ` Martin KaFai Lau
2020-11-11 0:17 ` Andrii Nakryiko
2020-11-11 0:20 ` Martin KaFai Lau
2020-11-06 22:08 ` [PATCH bpf-next 3/3] bpf: selftest: Use " Martin KaFai Lau
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).