* [PATCH bpf-next v2 0/4] bpf: Implement two type cast kfuncs
@ 2022-11-20 16:15 Yonghong Song
  2022-11-20 16:15 ` [PATCH bpf-next v2 1/4] bpf: Add support for kfunc set with common btf_ids Yonghong Song
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Yonghong Song @ 2022-11-20 16:15 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	John Fastabend, kernel-team, Kumar Kartikeya Dwivedi,
	Martin KaFai Lau

Currently, a non-tracing bpf program typically has a single 'context' argument
of a predefined uapi struct type. Based on this uapi struct, the user is able
to access the fields defined in the uapi header. Inside the kernel, the
user-visible 'context' argument is replaced with the 'kernel context' (or
'kctx' for short), which holds more information than the uapi header exposes.
To access info not available in the uapi header, people typically do one of
two things:
  (1). extend the uapi so that more fields can be reached from the 'context'.
  (2). use the bpf_probe_read_kernel() helper to read a particular field
    based on the kctx layout (see the sketch below).
Approach (1) needs a uapi change, and approach (2) makes the code more complex
since direct memory access is not allowed.
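
As a rough, hypothetical sketch of approach (2), assuming vmlinux.h types,
the usual libbpf helper includes, a privileged loader, and that the verifier
accepts the ctx value as the source pointer of the probe read, reading a
kernel-only sk_buff field from a tc program might look like:

  SEC("tc")
  int read_kskb_len(struct __sk_buff *skb)
  {
  	/* The runtime object behind 'skb' is a struct sk_buff, but the
  	 * verifier only knows the uapi struct __sk_buff, so the kernel
  	 * field has to be probe-read instead of loaded directly.
  	 */
  	struct sk_buff *kskb = (struct sk_buff *)skb;
  	__u32 len = 0;

  	bpf_probe_read_kernel(&len, sizeof(len), &kskb->len);
  	return len ? 0 : 1;
  }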

There are already a few instances of trying to access more information from
the kctx:
  . trying to access some fields of the perf_event kctx ([1]).
  . trying to access some fields of the xdp kctx ([2]).

This patch set allows direct memory access to kctx fields by introducing the
bpf_cast_to_kern_ctx() kfunc.

Martin mentioned a use case involving a type cast like the one below:
  #define skb_shinfo(SKB) ((struct skb_shared_info *)(skb_end_pointer(SKB)))
which is basically an 'unsigned char *' cast to a 'struct skb_shared_info *'.
This patch set supports such use cases as well with the bpf_rdonly_cast()
kfunc, as shown in the sketch below.
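
For illustration, a minimal sketch (mirroring the tc selftest added in patch 4,
and assuming vmlinux.h plus bpf_core_read.h; names are illustrative) of how the
two kfuncs are meant to be used together:

  void *bpf_cast_to_kern_ctx(void *) __ksym;
  void *bpf_rdonly_cast(void *, __u32) __ksym;

  SEC("tc")
  int cast_example(struct __sk_buff *skb)
  {
  	/* uapi ctx -> kernel ctx, then plain verifier-checked loads */
  	struct sk_buff *kskb = bpf_cast_to_kern_ctx(skb);
  	struct skb_shared_info *shinfo;

  	/* open-coded skb_shinfo(); the result is read-only */
  	shinfo = bpf_rdonly_cast(kskb->head + kskb->end,
  				 bpf_core_type_id_kernel(struct skb_shared_info));
  	return shinfo->meta_len == 0 ? 0 : 1;
  }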

In this series, Patch 1 adds support for kfuncs that are available to all
program types. Patch 2 adds the bpf_cast_to_kern_ctx() kfunc, Patch 3 adds the
bpf_rdonly_cast() kfunc, and Patch 4 adds a few positive and negative tests.

  [1] https://lore.kernel.org/bpf/ad15b398-9069-4a0e-48cb-4bb651ec3088@meta.com/
  [2] https://lore.kernel.org/bpf/20221109215242.1279993-1-john.fastabend@gmail.com/

Changelog:
  rfcv1 -> v2:
    - break the original single kfunc into two.
    - add missing error checks and error logs.
    - adapt to the new conventions in
      https://lore.kernel.org/all/20221118015614.2013203-1-memxor@gmail.com/
      for example, with the __ign and __k suffixes.
    - add support in fixup_kfunc_call() to replace kfunc calls with a single mov.

Yonghong Song (4):
  bpf: Add support for kfunc set with common btf_ids
  bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx
  bpf: Add a kfunc for generic type cast
  bpf: Add type cast unit tests

 include/linux/btf.h                           |   5 +
 kernel/bpf/btf.c                              |  33 +++++
 kernel/bpf/helpers.c                          |  25 +++-
 kernel/bpf/verifier.c                         |  45 ++++++-
 .../selftests/bpf/prog_tests/type_cast.c      | 114 ++++++++++++++++++
 tools/testing/selftests/bpf/progs/type_cast.c |  83 +++++++++++++
 6 files changed, 303 insertions(+), 2 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/type_cast.c
 create mode 100644 tools/testing/selftests/bpf/progs/type_cast.c

-- 
2.30.2



* [PATCH bpf-next v2 1/4] bpf: Add support for kfunc set with common btf_ids
  2022-11-20 16:15 [PATCH bpf-next v2 0/4] bpf: Implement two type cast kfuncs Yonghong Song
@ 2022-11-20 16:15 ` Yonghong Song
  2022-11-20 16:15 ` [PATCH bpf-next v2 2/4] bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx Yonghong Song
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2022-11-20 16:15 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	John Fastabend, kernel-team, Kumar Kartikeya Dwivedi,
	Martin KaFai Lau

Later patches will introduce the kfuncs bpf_cast_to_kern_ctx() and
bpf_rdonly_cast(), which apply to all program types. Currently, a kfunc set
can only be registered for individual program types. This patch adds support
for kfunc sets that apply to all program types, as sketched below.
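
For illustration, a hedged sketch of the registration pattern this enables
(the function and set names here are made up; the real users are added in the
next two patches):

  /* The kfunc body is still needed so resolve_btfids can find the symbol. */
  void *bpf_my_common_kfunc(void *obj)
  {
  	return obj;
  }

  BTF_SET8_START(my_common_btf_ids)
  BTF_ID_FLAGS(func, bpf_my_common_kfunc)
  BTF_SET8_END(my_common_btf_ids)

  static const struct btf_kfunc_id_set my_common_kfunc_set = {
  	.owner = THIS_MODULE,
  	.set   = &my_common_btf_ids,
  };

  static int __init my_kfunc_init(void)
  {
  	/* BPF_PROG_TYPE_UNSPEC maps to the new BTF_KFUNC_HOOK_COMMON,
  	 * so the set becomes visible to every program type.
  	 */
  	return register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC,
  					 &my_common_kfunc_set);
  }
  late_initcall(my_kfunc_init);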

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 kernel/bpf/btf.c     |  8 ++++++++
 kernel/bpf/helpers.c | 13 ++++++++++++-
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index f7d5fab61535..0a3abbe56c5d 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -199,6 +199,7 @@ DEFINE_IDR(btf_idr);
 DEFINE_SPINLOCK(btf_idr_lock);
 
 enum btf_kfunc_hook {
+	BTF_KFUNC_HOOK_COMMON,
 	BTF_KFUNC_HOOK_XDP,
 	BTF_KFUNC_HOOK_TC,
 	BTF_KFUNC_HOOK_STRUCT_OPS,
@@ -7523,6 +7524,8 @@ static u32 *__btf_kfunc_id_set_contains(const struct btf *btf,
 static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
 {
 	switch (prog_type) {
+	case BPF_PROG_TYPE_UNSPEC:
+		return BTF_KFUNC_HOOK_COMMON;
 	case BPF_PROG_TYPE_XDP:
 		return BTF_KFUNC_HOOK_XDP;
 	case BPF_PROG_TYPE_SCHED_CLS:
@@ -7551,6 +7554,11 @@ u32 *btf_kfunc_id_set_contains(const struct btf *btf,
 			       u32 kfunc_btf_id)
 {
 	enum btf_kfunc_hook hook;
+	u32 *kfunc_flags;
+
+	kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id);
+	if (kfunc_flags)
+		return kfunc_flags;
 
 	hook = bpf_prog_type_to_kfunc_hook(prog_type);
 	return __btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id);
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 212e791d7452..eaae7f474eda 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1843,6 +1843,14 @@ static const struct btf_kfunc_id_set generic_kfunc_set = {
 	.set   = &generic_btf_ids,
 };
 
+BTF_SET8_START(common_btf_ids)
+BTF_SET8_END(common_btf_ids)
+
+static const struct btf_kfunc_id_set common_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set   = &common_btf_ids,
+};
+
 static int __init kfunc_init(void)
 {
 	int ret;
@@ -1850,7 +1858,10 @@ static int __init kfunc_init(void)
 	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &generic_kfunc_set);
 	if (ret)
 		return ret;
-	return register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &generic_kfunc_set);
+	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &generic_kfunc_set);
+	if (ret)
+		return ret;
+	return register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC, &common_kfunc_set);
 }
 
 late_initcall(kfunc_init);
-- 
2.30.2



* [PATCH bpf-next v2 2/4] bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx
  2022-11-20 16:15 [PATCH bpf-next v2 0/4] bpf: Implement two type cast kfuncs Yonghong Song
  2022-11-20 16:15 ` [PATCH bpf-next v2 1/4] bpf: Add support for kfunc set with common btf_ids Yonghong Song
@ 2022-11-20 16:15 ` Yonghong Song
  2022-11-20 18:33   ` Alexei Starovoitov
  2022-11-20 16:15 ` [PATCH bpf-next v2 3/4] bpf: Add a kfunc for generic type cast Yonghong Song
  2022-11-20 16:15 ` [PATCH bpf-next v2 4/4] bpf: Add type cast unit tests Yonghong Song
  3 siblings, 1 reply; 10+ messages in thread
From: Yonghong Song @ 2022-11-20 16:15 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	John Fastabend, kernel-team, Kumar Kartikeya Dwivedi,
	Martin KaFai Lau

Implement the bpf_cast_to_kern_ctx() kfunc, which casts a uapi ctx object to
the corresponding kernel ctx. Previously, if users wanted to access data
available in the kctx but not in the uapi ctx, the bpf_probe_read_kernel()
helper was needed. bpf_cast_to_kern_ctx() allows direct memory access, which
makes the code simpler and easier to understand.
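
For illustration, a minimal BPF-side sketch (a fuller version is in the
selftests added later in this series; assumes vmlinux.h and the usual libbpf
helper includes, and the program name is illustrative):

  void *bpf_cast_to_kern_ctx(void *) __ksym;

  SEC("xdp")
  int cast_ctx_example(struct xdp_md *ctx)
  {
  	/* Cast the uapi ctx to the kernel struct xdp_buff, then read
  	 * kernel-only state with plain, verifier-checked loads.
  	 */
  	struct xdp_buff *kctx = bpf_cast_to_kern_ctx(ctx);

  	if (kctx->rxq->dev->ifindex != 1)
  		return XDP_DROP;
  	return XDP_PASS;
  }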

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 include/linux/btf.h   |  5 +++++
 kernel/bpf/btf.c      | 25 +++++++++++++++++++++++++
 kernel/bpf/helpers.c  |  6 ++++++
 kernel/bpf/verifier.c | 21 +++++++++++++++++++++
 4 files changed, 57 insertions(+)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index d5b26380a60f..4b5d799f5d02 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -470,6 +470,7 @@ const struct btf_member *
 btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
 		      const struct btf_type *t, enum bpf_prog_type prog_type,
 		      int arg);
+int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type);
 bool btf_types_are_same(const struct btf *btf1, u32 id1,
 			const struct btf *btf2, u32 id2);
 #else
@@ -514,6 +515,10 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
 {
 	return NULL;
 }
+static inline int get_kern_ctx_btf_id(struct bpf_verifier_log *log,
+				      enum bpf_prog_type prog_type) {
+	return -EINVAL;
+}
 static inline bool btf_types_are_same(const struct btf *btf1, u32 id1,
 				      const struct btf *btf2, u32 id2)
 {
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 0a3abbe56c5d..bef1b6cfe6b8 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5603,6 +5603,31 @@ static int btf_translate_to_vmlinux(struct bpf_verifier_log *log,
 	return kern_ctx_type->type;
 }
 
+int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type)
+{
+	const struct btf_member *kctx_member;
+	const struct btf_type *conv_struct;
+	const struct btf_type *kctx_type;
+	u32 kctx_type_id;
+
+	conv_struct = bpf_ctx_convert.t;
+	if (!conv_struct) {
+		bpf_log(log, "btf_vmlinux is malformed\n");
+		return -EINVAL;
+	}
+
+	/* get member for kernel ctx type */
+	kctx_member = btf_type_member(conv_struct) + bpf_ctx_convert_map[prog_type] * 2 + 1;
+	kctx_type_id = kctx_member->type;
+	kctx_type = btf_type_by_id(btf_vmlinux, kctx_type_id);
+	if (!btf_type_is_struct(kctx_type)) {
+		bpf_log(log, "kern ctx type id %u is not a struct\n", kctx_type_id);
+		return -EINVAL;
+	}
+
+	return kctx_type_id;
+}
+
 BTF_ID_LIST(bpf_ctx_convert_btf_id)
 BTF_ID(struct, bpf_ctx_convert)
 
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index eaae7f474eda..dc6e994feeb9 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1824,6 +1824,11 @@ struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
 	return __bpf_list_del(head, true);
 }
 
+void *bpf_cast_to_kern_ctx(void *obj)
+{
+	return obj;
+}
+
 __diag_pop();
 
 BTF_SET8_START(generic_btf_ids)
@@ -1844,6 +1849,7 @@ static const struct btf_kfunc_id_set generic_kfunc_set = {
 };
 
 BTF_SET8_START(common_btf_ids)
+BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx)
 BTF_SET8_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 195d24316750..a18b519c5225 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8118,6 +8118,7 @@ enum special_kfunc_type {
 	KF_bpf_list_push_back,
 	KF_bpf_list_pop_front,
 	KF_bpf_list_pop_back,
+	KF_bpf_cast_to_kern_ctx,
 };
 
 BTF_SET_START(special_kfunc_set)
@@ -8127,6 +8128,7 @@ BTF_ID(func, bpf_list_push_front)
 BTF_ID(func, bpf_list_push_back)
 BTF_ID(func, bpf_list_pop_front)
 BTF_ID(func, bpf_list_pop_back)
+BTF_ID(func, bpf_cast_to_kern_ctx)
 BTF_SET_END(special_kfunc_set)
 
 BTF_ID_LIST(special_kfunc_list)
@@ -8136,6 +8138,7 @@ BTF_ID(func, bpf_list_push_front)
 BTF_ID(func, bpf_list_push_back)
 BTF_ID(func, bpf_list_pop_front)
 BTF_ID(func, bpf_list_pop_back)
+BTF_ID(func, bpf_cast_to_kern_ctx)
 
 static enum kfunc_ptr_arg_type
 get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
@@ -8149,6 +8152,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 	struct bpf_reg_state *reg = &regs[regno];
 	bool arg_mem_size = false;
 
+	if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx])
+		return KF_ARG_PTR_TO_CTX;
+
 	/* In this function, we verify the kfunc's BTF as per the argument type,
 	 * leaving the rest of the verification with respect to the register
 	 * type to our caller. When a set of conditions hold in the BTF type of
@@ -8633,6 +8639,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d expected pointer to ctx, but got %s\n", i, btf_type_str(t));
 				return -EINVAL;
 			}
+
+			if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
+				ret = get_kern_ctx_btf_id(&env->log, resolve_prog_type(env->prog));
+				if (ret < 0)
+					return -EINVAL;
+				meta->arg_constant.value = ret;
+			}
 			break;
 		case KF_ARG_PTR_TO_ALLOC_BTF_ID:
 			if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
@@ -8880,6 +8893,11 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 				regs[BPF_REG_0].btf = field->list_head.btf;
 				regs[BPF_REG_0].btf_id = field->list_head.value_btf_id;
 				regs[BPF_REG_0].off = field->list_head.node_offset;
+			} else if (meta.func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
+				mark_reg_known_zero(env, regs, BPF_REG_0);
+				regs[BPF_REG_0].type = PTR_TO_BTF_ID;
+				regs[BPF_REG_0].btf = desc_btf;
+				regs[BPF_REG_0].btf_id = meta.arg_constant.value;
 			} else {
 				verbose(env, "kernel function %s unhandled dynamic return type\n",
 					meta.func_name);
@@ -15130,6 +15148,9 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		insn_buf[1] = addr[1];
 		insn_buf[2] = *insn;
 		*cnt = 3;
+	} else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
+		insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
+		*cnt = 1;
 	}
 	return 0;
 }
-- 
2.30.2



* [PATCH bpf-next v2 3/4] bpf: Add a kfunc for generic type cast
  2022-11-20 16:15 [PATCH bpf-next v2 0/4] bpf: Implement two type cast kfuncs Yonghong Song
  2022-11-20 16:15 ` [PATCH bpf-next v2 1/4] bpf: Add support for kfunc set with common btf_ids Yonghong Song
  2022-11-20 16:15 ` [PATCH bpf-next v2 2/4] bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx Yonghong Song
@ 2022-11-20 16:15 ` Yonghong Song
  2022-11-20 18:42   ` Alexei Starovoitov
  2022-11-20 16:15 ` [PATCH bpf-next v2 4/4] bpf: Add type cast unit tests Yonghong Song
  3 siblings, 1 reply; 10+ messages in thread
From: Yonghong Song @ 2022-11-20 16:15 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	John Fastabend, kernel-team, Kumar Kartikeya Dwivedi,
	Martin KaFai Lau

Implement bpf_rdonly_cast(), which casts an object to a specified type. This
supports use cases like the one below:
  #define skb_shinfo(SKB) ((struct skb_shared_info *)(skb_end_pointer(SKB)))
where skb_end_pointer(SKB) is an 'unsigned char *' and needs to be cast to a
'struct skb_shared_info *'.

The signature of bpf_rdonly_cast() looks like
   void *bpf_rdonly_cast(void *obj, __u32 btf_id)
The function returns the same 'obj' but as a PTR_TO_BTF_ID with the given
btf_id. The verifier ensures that btf_id refers to a struct type.

Since the requested type cast may not reflect what 'obj' actually represents,
the returned pointer is marked PTR_UNTRUSTED, so the return value and any
subsequent pointer chasing cannot be used as helper/kfunc arguments.
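
For illustration, a sketch of that restriction (based on the negative selftest
added later in this series and its includes; 'enter_id' is assumed to be the
task storage map declared there, and the program name is illustrative):

  SEC("tp_btf/sys_enter")
  int BPF_PROG(untrusted_example, struct pt_regs *regs, long id)
  {
  	struct task_struct *task = bpf_get_current_task_btf();
  	struct task_struct *dup;

  	/* 'dup' is PTR_TO_BTF_ID | PTR_UNTRUSTED: direct reads are fine,
  	 * but passing it to a helper such as bpf_task_storage_get() is
  	 * rejected, so this program fails to load.
  	 */
  	dup = bpf_rdonly_cast(task, bpf_core_type_id_kernel(struct task_struct));
  	(void)bpf_task_storage_get(&enter_id, dup, 0, 0);
  	return 0;
  }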

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 kernel/bpf/helpers.c  |  6 ++++++
 kernel/bpf/verifier.c | 26 ++++++++++++++++++++++++--
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index dc6e994feeb9..9d9b91d2d047 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1829,6 +1829,11 @@ void *bpf_cast_to_kern_ctx(void *obj)
 	return obj;
 }
 
+void *bpf_rdonly_cast(void *obj__ign, u32 btf_id__k)
+{
+	return obj__ign;
+}
+
 __diag_pop();
 
 BTF_SET8_START(generic_btf_ids)
@@ -1850,6 +1855,7 @@ static const struct btf_kfunc_id_set generic_kfunc_set = {
 
 BTF_SET8_START(common_btf_ids)
 BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx)
+BTF_ID_FLAGS(func, bpf_rdonly_cast)
 BTF_SET8_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a18b519c5225..3f1094efdb04 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8119,6 +8119,7 @@ enum special_kfunc_type {
 	KF_bpf_list_pop_front,
 	KF_bpf_list_pop_back,
 	KF_bpf_cast_to_kern_ctx,
+	KF_bpf_rdonly_cast,
 };
 
 BTF_SET_START(special_kfunc_set)
@@ -8129,6 +8130,7 @@ BTF_ID(func, bpf_list_push_back)
 BTF_ID(func, bpf_list_pop_front)
 BTF_ID(func, bpf_list_pop_back)
 BTF_ID(func, bpf_cast_to_kern_ctx)
+BTF_ID(func, bpf_rdonly_cast)
 BTF_SET_END(special_kfunc_set)
 
 BTF_ID_LIST(special_kfunc_list)
@@ -8139,6 +8141,7 @@ BTF_ID(func, bpf_list_push_back)
 BTF_ID(func, bpf_list_pop_front)
 BTF_ID(func, bpf_list_pop_back)
 BTF_ID(func, bpf_cast_to_kern_ctx)
+BTF_ID(func, bpf_rdonly_cast)
 
 static enum kfunc_ptr_arg_type
 get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
@@ -8769,6 +8772,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	u32 i, nargs, func_id, ptr_type_id;
 	int err, insn_idx = *insn_idx_p;
 	const struct btf_param *args;
+	const struct btf_type *ret_t;
 	struct btf *desc_btf;
 	u32 *kfunc_flags;
 
@@ -8848,7 +8852,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 
 		if (meta.btf == btf_vmlinux && btf_id_set_contains(&special_kfunc_set, meta.func_id)) {
 			if (meta.func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
-				const struct btf_type *ret_t;
 				struct btf *ret_btf;
 				u32 ret_btf_id;
 
@@ -8898,6 +8901,24 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 				regs[BPF_REG_0].type = PTR_TO_BTF_ID;
 				regs[BPF_REG_0].btf = desc_btf;
 				regs[BPF_REG_0].btf_id = meta.arg_constant.value;
+			} else if (meta.func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
+				if (!capable(CAP_PERFMON)) {
+					verbose(env,
+						"kfunc bpf_rdonly_cast requires CAP_PERFMON capability\n");
+					return -EACCES;
+				}
+
+				ret_t = btf_type_by_id(desc_btf, meta.arg_constant.value);
+				if (!ret_t || !btf_type_is_struct(ret_t)) {
+					verbose(env,
+						"kfunc bpf_rdonly_cast type ID argument must be of a struct\n");
+					return -EINVAL;
+				}
+
+				mark_reg_known_zero(env, regs, BPF_REG_0);
+				regs[BPF_REG_0].type = PTR_TO_BTF_ID | PTR_UNTRUSTED;
+				regs[BPF_REG_0].btf = desc_btf;
+				regs[BPF_REG_0].btf_id = meta.arg_constant.value;
 			} else {
 				verbose(env, "kernel function %s unhandled dynamic return type\n",
 					meta.func_name);
@@ -15148,7 +15169,8 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		insn_buf[1] = addr[1];
 		insn_buf[2] = *insn;
 		*cnt = 3;
-	} else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
+	} else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
+		   desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
 		insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
 		*cnt = 1;
 	}
-- 
2.30.2



* [PATCH bpf-next v2 4/4] bpf: Add type cast unit tests
  2022-11-20 16:15 [PATCH bpf-next v2 0/4] bpf: Implement two type cast kfuncs Yonghong Song
                   ` (2 preceding siblings ...)
  2022-11-20 16:15 ` [PATCH bpf-next v2 3/4] bpf: Add a kfunc for generic type cast Yonghong Song
@ 2022-11-20 16:15 ` Yonghong Song
  3 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2022-11-20 16:15 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	John Fastabend, kernel-team, Kumar Kartikeya Dwivedi,
	Martin KaFai Lau

Three tests are added. One is from John Fastabend ([1]) and tests
tracing-style access to the kernel ctx from an xdp program. Another is a tc
test covering both tracing-style kernel ctx access and an explicit non-ctx
type cast. The third is a pair of negative tests: a tp_btf test where
bpf_rdonly_cast() returns an untrusted pointer that cannot be used as a helper
argument, and a tracepoint test where the kernel ctx is a u64.

  [1] https://lore.kernel.org/bpf/20221109215242.1279993-1-john.fastabend@gmail.com/

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../selftests/bpf/prog_tests/type_cast.c      | 114 ++++++++++++++++++
 tools/testing/selftests/bpf/progs/type_cast.c |  83 +++++++++++++
 2 files changed, 197 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/type_cast.c
 create mode 100644 tools/testing/selftests/bpf/progs/type_cast.c

diff --git a/tools/testing/selftests/bpf/prog_tests/type_cast.c b/tools/testing/selftests/bpf/prog_tests/type_cast.c
new file mode 100644
index 000000000000..9317d5fa2635
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/type_cast.c
@@ -0,0 +1,114 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "type_cast.skel.h"
+
+static void test_xdp(void)
+{
+	struct type_cast *skel;
+	int err, prog_fd;
+	char buf[128];
+
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.data_out = buf,
+		.data_size_out = sizeof(buf),
+		.repeat = 1,
+	);
+
+	skel = type_cast__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	bpf_program__set_autoload(skel->progs.md_xdp, true);
+	err = type_cast__load(skel);
+	if (!ASSERT_OK(err, "skel_load"))
+		goto out;
+
+	prog_fd = bpf_program__fd(skel->progs.md_xdp);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, XDP_PASS, "xdp test_run retval");
+
+	ASSERT_EQ(skel->bss->ifindex, 1, "xdp_md ifindex");
+	ASSERT_EQ(skel->bss->ifindex, skel->bss->ingress_ifindex, "xdp_md ingress_ifindex");
+	ASSERT_STREQ(skel->bss->name, "lo", "xdp_md name");
+	ASSERT_NEQ(skel->bss->inum, 0, "xdp_md inum");
+
+out:
+	type_cast__destroy(skel);
+}
+
+static void test_tc(void)
+{
+	struct type_cast *skel;
+	int err, prog_fd;
+
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
+
+	skel = type_cast__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	bpf_program__set_autoload(skel->progs.md_skb, true);
+	err = type_cast__load(skel);
+	if (!ASSERT_OK(err, "skel_load"))
+		goto out;
+
+	prog_fd = bpf_program__fd(skel->progs.md_skb);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "tc test_run retval");
+
+	ASSERT_EQ(skel->bss->meta_len, 0, "skb meta_len");
+	ASSERT_EQ(skel->bss->frag0_len, 0, "skb frag0_len");
+	ASSERT_NEQ(skel->bss->kskb_len, 0, "skb len");
+	ASSERT_NEQ(skel->bss->kskb2_len, 0, "skb2 len");
+	ASSERT_EQ(skel->bss->kskb_len, skel->bss->kskb2_len, "skb len compare");
+
+out:
+	type_cast__destroy(skel);
+}
+
+static const char * const negative_tests[] = {
+	"untrusted_ptr",
+	"kctx_u64",
+};
+
+static void test_negative(void)
+{
+	struct bpf_program *prog;
+	struct type_cast *skel;
+	int i, err;
+
+	for (i = 0; i < ARRAY_SIZE(negative_tests); i++) {
+		skel = type_cast__open();
+		if (!ASSERT_OK_PTR(skel, "skel_open"))
+			return;
+
+		prog = bpf_object__find_program_by_name(skel->obj, negative_tests[i]);
+		if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+			goto out;
+		bpf_program__set_autoload(prog, true);
+		err = type_cast__load(skel);
+		ASSERT_ERR(err, "skel_load");
+out:
+		type_cast__destroy(skel);
+	}
+}
+
+void test_type_cast(void)
+{
+	if (test__start_subtest("xdp"))
+		test_xdp();
+	if (test__start_subtest("tc"))
+		test_tc();
+	if (test__start_subtest("negative"))
+		test_negative();
+}
diff --git a/tools/testing/selftests/bpf/progs/type_cast.c b/tools/testing/selftests/bpf/progs/type_cast.c
new file mode 100644
index 000000000000..eb78e6f03129
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/type_cast.c
@@ -0,0 +1,83 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+struct {
+	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, long);
+} enter_id SEC(".maps");
+
+#define	IFNAMSIZ 16
+
+int ifindex, ingress_ifindex;
+char name[IFNAMSIZ];
+unsigned int inum;
+unsigned int meta_len, frag0_len, kskb_len, kskb2_len;
+
+void *bpf_cast_to_kern_ctx(void *) __ksym;
+void *bpf_rdonly_cast(void *, __u32) __ksym;
+
+SEC("?xdp")
+int md_xdp(struct xdp_md *ctx)
+{
+	struct xdp_buff *kctx = bpf_cast_to_kern_ctx(ctx);
+	struct net_device *dev;
+
+	dev = kctx->rxq->dev;
+	ifindex = dev->ifindex;
+	inum = dev->nd_net.net->ns.inum;
+	__builtin_memcpy(name, dev->name, IFNAMSIZ);
+	ingress_ifindex = ctx->ingress_ifindex;
+	return XDP_PASS;
+}
+
+SEC("?tc")
+int md_skb(struct __sk_buff *skb)
+{
+	struct sk_buff *kskb = bpf_cast_to_kern_ctx(skb);
+	struct skb_shared_info *shared_info;
+	struct sk_buff *kskb2;
+
+	kskb_len = kskb->len;
+
+	/* Simulate the following kernel macro:
+	 *   #define skb_shinfo(SKB) ((struct skb_shared_info *)(skb_end_pointer(SKB)))
+	 */
+	shared_info = bpf_rdonly_cast(kskb->head + kskb->end,
+		bpf_core_type_id_kernel(struct skb_shared_info));
+	meta_len = shared_info->meta_len;
+	frag0_len = shared_info->frag_list->len;
+
+	/* kskb2 should be equal to kskb */
+	kskb2 = bpf_rdonly_cast(kskb, bpf_core_type_id_kernel(struct sk_buff));
+	kskb2_len = kskb2->len;
+	return 0;
+}
+
+SEC("?tp_btf/sys_enter")
+int BPF_PROG(untrusted_ptr, struct pt_regs *regs, long id)
+{
+	struct task_struct *task, *task_dup;
+	long *ptr;
+
+	task = bpf_get_current_task_btf();
+	task_dup = bpf_rdonly_cast(task, bpf_core_type_id_kernel(struct task_struct));
+	(void)bpf_task_storage_get(&enter_id, task_dup, 0, 0);
+	return 0;
+}
+
+SEC("?tracepoint/syscalls/sys_enter_nanosleep")
+int kctx_u64(void *ctx)
+{
+	u64 *kctx = bpf_rdonly_cast(ctx, bpf_core_type_id_kernel(u64));
+
+	(void)kctx;
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.30.2



* Re: [PATCH bpf-next v2 2/4] bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx
  2022-11-20 16:15 ` [PATCH bpf-next v2 2/4] bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx Yonghong Song
@ 2022-11-20 18:33   ` Alexei Starovoitov
  2022-11-20 18:55     ` Yonghong Song
  0 siblings, 1 reply; 10+ messages in thread
From: Alexei Starovoitov @ 2022-11-20 18:33 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	John Fastabend, kernel-team, Kumar Kartikeya Dwivedi,
	Martin KaFai Lau

On Sun, Nov 20, 2022 at 08:15:22AM -0800, Yonghong Song wrote:
> Implement bpf_cast_to_kern_ctx() kfunc which does a type cast
> of a uapi ctx object to the corresponding kernel ctx. Previously
> if users want to access some data available in kctx but not
> in uapi ctx, bpf_probe_read_kernel() helper is needed.
> The introduction of bpf_cast_to_kern_ctx() allows direct
> memory access which makes code simpler and easier to understand.
> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  include/linux/btf.h   |  5 +++++
>  kernel/bpf/btf.c      | 25 +++++++++++++++++++++++++
>  kernel/bpf/helpers.c  |  6 ++++++
>  kernel/bpf/verifier.c | 21 +++++++++++++++++++++
>  4 files changed, 57 insertions(+)
> 
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index d5b26380a60f..4b5d799f5d02 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -470,6 +470,7 @@ const struct btf_member *
>  btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
>  		      const struct btf_type *t, enum bpf_prog_type prog_type,
>  		      int arg);
> +int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type);
>  bool btf_types_are_same(const struct btf *btf1, u32 id1,
>  			const struct btf *btf2, u32 id2);
>  #else
> @@ -514,6 +515,10 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
>  {
>  	return NULL;
>  }
> +static inline int get_kern_ctx_btf_id(struct bpf_verifier_log *log,
> +				      enum bpf_prog_type prog_type) {
> +	return -EINVAL;
> +}
>  static inline bool btf_types_are_same(const struct btf *btf1, u32 id1,
>  				      const struct btf *btf2, u32 id2)
>  {
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 0a3abbe56c5d..bef1b6cfe6b8 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -5603,6 +5603,31 @@ static int btf_translate_to_vmlinux(struct bpf_verifier_log *log,
>  	return kern_ctx_type->type;
>  }
>  
> +int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type)
> +{
> +	const struct btf_member *kctx_member;
> +	const struct btf_type *conv_struct;
> +	const struct btf_type *kctx_type;
> +	u32 kctx_type_id;
> +
> +	conv_struct = bpf_ctx_convert.t;
> +	if (!conv_struct) {
> +		bpf_log(log, "btf_vmlinux is malformed\n");
> +		return -EINVAL;
> +	}

If we get to this point this internal pointer would be already checked.
No need to check it again. Just use it.

> +
> +	/* get member for kernel ctx type */
> +	kctx_member = btf_type_member(conv_struct) + bpf_ctx_convert_map[prog_type] * 2 + 1;
> +	kctx_type_id = kctx_member->type;
> +	kctx_type = btf_type_by_id(btf_vmlinux, kctx_type_id);
> +	if (!btf_type_is_struct(kctx_type)) {
> +		bpf_log(log, "kern ctx type id %u is not a struct\n", kctx_type_id);
> +		return -EINVAL;
> +	}
> +
> +	return kctx_type_id;
> +}
> +
>  BTF_ID_LIST(bpf_ctx_convert_btf_id)
>  BTF_ID(struct, bpf_ctx_convert)
>  
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index eaae7f474eda..dc6e994feeb9 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1824,6 +1824,11 @@ struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
>  	return __bpf_list_del(head, true);
>  }
>  
> +void *bpf_cast_to_kern_ctx(void *obj)
> +{
> +	return obj;
> +}
> +
>  __diag_pop();
>  
>  BTF_SET8_START(generic_btf_ids)
> @@ -1844,6 +1849,7 @@ static const struct btf_kfunc_id_set generic_kfunc_set = {
>  };
>  
>  BTF_SET8_START(common_btf_ids)
> +BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx)
>  BTF_SET8_END(common_btf_ids)
>  
>  static const struct btf_kfunc_id_set common_kfunc_set = {
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 195d24316750..a18b519c5225 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -8118,6 +8118,7 @@ enum special_kfunc_type {
>  	KF_bpf_list_push_back,
>  	KF_bpf_list_pop_front,
>  	KF_bpf_list_pop_back,
> +	KF_bpf_cast_to_kern_ctx,
>  };
>  
>  BTF_SET_START(special_kfunc_set)
> @@ -8127,6 +8128,7 @@ BTF_ID(func, bpf_list_push_front)
>  BTF_ID(func, bpf_list_push_back)
>  BTF_ID(func, bpf_list_pop_front)
>  BTF_ID(func, bpf_list_pop_back)
> +BTF_ID(func, bpf_cast_to_kern_ctx)
>  BTF_SET_END(special_kfunc_set)
>  
>  BTF_ID_LIST(special_kfunc_list)
> @@ -8136,6 +8138,7 @@ BTF_ID(func, bpf_list_push_front)
>  BTF_ID(func, bpf_list_push_back)
>  BTF_ID(func, bpf_list_pop_front)
>  BTF_ID(func, bpf_list_pop_back)
> +BTF_ID(func, bpf_cast_to_kern_ctx)
>  
>  static enum kfunc_ptr_arg_type
>  get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
> @@ -8149,6 +8152,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
>  	struct bpf_reg_state *reg = &regs[regno];
>  	bool arg_mem_size = false;
>  
> +	if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx])
> +		return KF_ARG_PTR_TO_CTX;
> +
>  	/* In this function, we verify the kfunc's BTF as per the argument type,
>  	 * leaving the rest of the verification with respect to the register
>  	 * type to our caller. When a set of conditions hold in the BTF type of
> @@ -8633,6 +8639,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>  				verbose(env, "arg#%d expected pointer to ctx, but got %s\n", i, btf_type_str(t));
>  				return -EINVAL;
>  			}
> +
> +			if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
> +				ret = get_kern_ctx_btf_id(&env->log, resolve_prog_type(env->prog));
> +				if (ret < 0)
> +					return -EINVAL;
> +				meta->arg_constant.value = ret;

It's not an arg, so 'arg_constant' doesn't fit.
No need to save every byte in bpf_kfunc_call_arg_meta.
Let's add a new field like 'ret_btf_id'.

> +			}
>  			break;
>  		case KF_ARG_PTR_TO_ALLOC_BTF_ID:
>  			if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
> @@ -8880,6 +8893,11 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  				regs[BPF_REG_0].btf = field->list_head.btf;
>  				regs[BPF_REG_0].btf_id = field->list_head.value_btf_id;
>  				regs[BPF_REG_0].off = field->list_head.node_offset;
> +			} else if (meta.func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
> +				mark_reg_known_zero(env, regs, BPF_REG_0);
> +				regs[BPF_REG_0].type = PTR_TO_BTF_ID;

Let's use PTR_TO_BTF_ID | PTR_TRUSTED here.
PTR_TRUSTED was just recently added (hours ago :)
With that, bpf_cast_to_kern_ctx() will return a trusted pointer and we will be
able to pass it to kfuncs and helpers that expect valid args.

> +				regs[BPF_REG_0].btf = desc_btf;
> +				regs[BPF_REG_0].btf_id = meta.arg_constant.value;
>  			} else {
>  				verbose(env, "kernel function %s unhandled dynamic return type\n",
>  					meta.func_name);
> @@ -15130,6 +15148,9 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  		insn_buf[1] = addr[1];
>  		insn_buf[2] = *insn;
>  		*cnt = 3;
> +	} else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
> +		insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
> +		*cnt = 1;

Nice! Important optimization.
I guess we still need:
 +void *bpf_cast_to_kern_ctx(void *obj)
 +{
 +     return obj;
 +}
otherwise resolve_btfids will be confused?


* Re: [PATCH bpf-next v2 3/4] bpf: Add a kfunc for generic type cast
  2022-11-20 16:15 ` [PATCH bpf-next v2 3/4] bpf: Add a kfunc for generic type cast Yonghong Song
@ 2022-11-20 18:42   ` Alexei Starovoitov
  0 siblings, 0 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2022-11-20 18:42 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	John Fastabend, kernel-team, Kumar Kartikeya Dwivedi,
	Martin KaFai Lau

On Sun, Nov 20, 2022 at 08:15:27AM -0800, Yonghong Song wrote:
> Implement bpf_rdonly_cast() which tries to cast the object
> to a specified type. This tries to support use case like below:
>   #define skb_shinfo(SKB) ((struct skb_shared_info *)(skb_end_pointer(SKB)))
> where skb_end_pointer(SKB) is a 'unsigned char *' and needs to
> be casted to 'struct skb_shared_info *'.
> 
> The signature of bpf_rdonly_cast() looks like
>    void *bpf_rdonly_cast(void *obj, __u32 btf_id)
> The function returns the same 'obj' but with PTR_TO_BTF_ID with
> btf_id. The verifier will ensure btf_id being a struct type.
> 
> Since the supported type cast may not reflect what the 'obj'
> represents, the returned btf_id is marked as PTR_UNTRUSTED, so
> the return value and subsequent pointer chasing cannot be
> used as helper/kfunc arguments.
> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  kernel/bpf/helpers.c  |  6 ++++++
>  kernel/bpf/verifier.c | 26 ++++++++++++++++++++++++--
>  2 files changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index dc6e994feeb9..9d9b91d2d047 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1829,6 +1829,11 @@ void *bpf_cast_to_kern_ctx(void *obj)
>  	return obj;
>  }
>  
> +void *bpf_rdonly_cast(void *obj__ign, u32 btf_id__k)
> +{
> +	return obj__ign;
> +}
> +
>  __diag_pop();
>  
>  BTF_SET8_START(generic_btf_ids)
> @@ -1850,6 +1855,7 @@ static const struct btf_kfunc_id_set generic_kfunc_set = {
>  
>  BTF_SET8_START(common_btf_ids)
>  BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx)
> +BTF_ID_FLAGS(func, bpf_rdonly_cast)
>  BTF_SET8_END(common_btf_ids)
>  
>  static const struct btf_kfunc_id_set common_kfunc_set = {
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index a18b519c5225..3f1094efdb04 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -8119,6 +8119,7 @@ enum special_kfunc_type {
>  	KF_bpf_list_pop_front,
>  	KF_bpf_list_pop_back,
>  	KF_bpf_cast_to_kern_ctx,
> +	KF_bpf_rdonly_cast,
>  };
>  
>  BTF_SET_START(special_kfunc_set)
> @@ -8129,6 +8130,7 @@ BTF_ID(func, bpf_list_push_back)
>  BTF_ID(func, bpf_list_pop_front)
>  BTF_ID(func, bpf_list_pop_back)
>  BTF_ID(func, bpf_cast_to_kern_ctx)
> +BTF_ID(func, bpf_rdonly_cast)
>  BTF_SET_END(special_kfunc_set)
>  
>  BTF_ID_LIST(special_kfunc_list)
> @@ -8139,6 +8141,7 @@ BTF_ID(func, bpf_list_push_back)
>  BTF_ID(func, bpf_list_pop_front)
>  BTF_ID(func, bpf_list_pop_back)
>  BTF_ID(func, bpf_cast_to_kern_ctx)
> +BTF_ID(func, bpf_rdonly_cast)
>  
>  static enum kfunc_ptr_arg_type
>  get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
> @@ -8769,6 +8772,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  	u32 i, nargs, func_id, ptr_type_id;
>  	int err, insn_idx = *insn_idx_p;
>  	const struct btf_param *args;
> +	const struct btf_type *ret_t;
>  	struct btf *desc_btf;
>  	u32 *kfunc_flags;
>  
> @@ -8848,7 +8852,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  
>  		if (meta.btf == btf_vmlinux && btf_id_set_contains(&special_kfunc_set, meta.func_id)) {
>  			if (meta.func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
> -				const struct btf_type *ret_t;
>  				struct btf *ret_btf;
>  				u32 ret_btf_id;
>  
> @@ -8898,6 +8901,24 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  				regs[BPF_REG_0].type = PTR_TO_BTF_ID;
>  				regs[BPF_REG_0].btf = desc_btf;
>  				regs[BPF_REG_0].btf_id = meta.arg_constant.value;
> +			} else if (meta.func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
> +				if (!capable(CAP_PERFMON)) {
> +					verbose(env,
> +						"kfunc bpf_rdonly_cast requires CAP_PERFMON capability\n");
> +					return -EACCES;
> +				}
> +
> +				ret_t = btf_type_by_id(desc_btf, meta.arg_constant.value);
> +				if (!ret_t || !btf_type_is_struct(ret_t)) {
> +					verbose(env,
> +						"kfunc bpf_rdonly_cast type ID argument must be of a struct\n");
> +					return -EINVAL;
> +				}
> +
> +				mark_reg_known_zero(env, regs, BPF_REG_0);
> +				regs[BPF_REG_0].type = PTR_TO_BTF_ID | PTR_UNTRUSTED;
> +				regs[BPF_REG_0].btf = desc_btf;
> +				regs[BPF_REG_0].btf_id = meta.arg_constant.value;
>  			} else {
>  				verbose(env, "kernel function %s unhandled dynamic return type\n",
>  					meta.func_name);
> @@ -15148,7 +15169,8 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  		insn_buf[1] = addr[1];
>  		insn_buf[2] = *insn;
>  		*cnt = 3;
> -	} else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
> +	} else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
> +		   desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
>  		insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
>  		*cnt = 1;

Nice!
After the kfunc refactoring, adding a new special kfunc looks clean and is easy
to review.

I was contemplating suggesting that we replace "__k" with "__btf_struct"
to have a single place that checks for btf_type_is_struct(),
but then realized that bpf_obj_new needs prog's btf_id
while bpf_rdonly_cast needs vmlinux's btf_id.
So let's keep __k for now.


* Re: [PATCH bpf-next v2 2/4] bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx
  2022-11-20 18:33   ` Alexei Starovoitov
@ 2022-11-20 18:55     ` Yonghong Song
  2022-11-20 19:10       ` Alexei Starovoitov
  0 siblings, 1 reply; 10+ messages in thread
From: Yonghong Song @ 2022-11-20 18:55 UTC (permalink / raw)
  To: Alexei Starovoitov, Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	John Fastabend, kernel-team, Kumar Kartikeya Dwivedi,
	Martin KaFai Lau



On 11/20/22 10:33 AM, Alexei Starovoitov wrote:
> On Sun, Nov 20, 2022 at 08:15:22AM -0800, Yonghong Song wrote:
>> Implement bpf_cast_to_kern_ctx() kfunc which does a type cast
>> of a uapi ctx object to the corresponding kernel ctx. Previously
>> if users want to access some data available in kctx but not
>> in uapi ctx, bpf_probe_read_kernel() helper is needed.
>> The introduction of bpf_cast_to_kern_ctx() allows direct
>> memory access which makes code simpler and easier to understand.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>   include/linux/btf.h   |  5 +++++
>>   kernel/bpf/btf.c      | 25 +++++++++++++++++++++++++
>>   kernel/bpf/helpers.c  |  6 ++++++
>>   kernel/bpf/verifier.c | 21 +++++++++++++++++++++
>>   4 files changed, 57 insertions(+)
>>
>> diff --git a/include/linux/btf.h b/include/linux/btf.h
>> index d5b26380a60f..4b5d799f5d02 100644
>> --- a/include/linux/btf.h
>> +++ b/include/linux/btf.h
>> @@ -470,6 +470,7 @@ const struct btf_member *
>>   btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
>>   		      const struct btf_type *t, enum bpf_prog_type prog_type,
>>   		      int arg);
>> +int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type);
>>   bool btf_types_are_same(const struct btf *btf1, u32 id1,
>>   			const struct btf *btf2, u32 id2);
>>   #else
>> @@ -514,6 +515,10 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
>>   {
>>   	return NULL;
>>   }
>> +static inline int get_kern_ctx_btf_id(struct bpf_verifier_log *log,
>> +				      enum bpf_prog_type prog_type) {
>> +	return -EINVAL;
>> +}
>>   static inline bool btf_types_are_same(const struct btf *btf1, u32 id1,
>>   				      const struct btf *btf2, u32 id2)
>>   {
>> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
>> index 0a3abbe56c5d..bef1b6cfe6b8 100644
>> --- a/kernel/bpf/btf.c
>> +++ b/kernel/bpf/btf.c
>> @@ -5603,6 +5603,31 @@ static int btf_translate_to_vmlinux(struct bpf_verifier_log *log,
>>   	return kern_ctx_type->type;
>>   }
>>   
>> +int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type)
>> +{
>> +	const struct btf_member *kctx_member;
>> +	const struct btf_type *conv_struct;
>> +	const struct btf_type *kctx_type;
>> +	u32 kctx_type_id;
>> +
>> +	conv_struct = bpf_ctx_convert.t;
>> +	if (!conv_struct) {
>> +		bpf_log(log, "btf_vmlinux is malformed\n");
>> +		return -EINVAL;
>> +	}
> 
> If we get to this point this internal pointer would be already checked.
> No need to check it again. Just use it.

This is probably not true.

Currently, conv_struct is tested in btf_get_prog_ctx_type(), which is called
by get_kfunc_ptr_arg_type().

const struct btf_member *
btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
                      const struct btf_type *t, enum bpf_prog_type prog_type,
                      int arg)
{
         const struct btf_type *conv_struct;
         const struct btf_type *ctx_struct;
         const struct btf_member *ctx_type;
         const char *tname, *ctx_tname;

         conv_struct = bpf_ctx_convert.t;
         if (!conv_struct) {
                 bpf_log(log, "btf_vmlinux is malformed\n");
                 return NULL;
         }
	...
}

In get_kfunc_ptr_arg_type(),

...

        /* In this function, we verify the kfunc's BTF as per the argument type,
         * leaving the rest of the verification with respect to the register
         * type to our caller. When a set of conditions hold in the BTF type of
         * arguments, we resolve it to a known kfunc_ptr_arg_type.
         */
        if (btf_get_prog_ctx_type(&env->log, meta->btf, t,
                                  resolve_prog_type(env->prog), argno))
                return KF_ARG_PTR_TO_CTX;

Note that if bpf_ctx_convert.t is NULL, btf_get_prog_ctx_type() simply
returns NULL and the logic simply falls through.

Should we actually add a NULL check for bpf_ctx_convert.t in
btf_parse_vmlinux()?

...
         err = btf_check_type_tags(env, btf, 1);
         if (err)
                 goto errout;

         /* btf_parse_vmlinux() runs under bpf_verifier_lock */
         bpf_ctx_convert.t = btf_type_by_id(btf, bpf_ctx_convert_btf_id[0]);

         bpf_struct_ops_init(btf, log);
...


> 
>> +
>> +	/* get member for kernel ctx type */
>> +	kctx_member = btf_type_member(conv_struct) + bpf_ctx_convert_map[prog_type] * 2 + 1;
>> +	kctx_type_id = kctx_member->type;
>> +	kctx_type = btf_type_by_id(btf_vmlinux, kctx_type_id);
>> +	if (!btf_type_is_struct(kctx_type)) {
>> +		bpf_log(log, "kern ctx type id %u is not a struct\n", kctx_type_id);
>> +		return -EINVAL;
>> +	}
>> +
>> +	return kctx_type_id;
>> +}
>> +
>>   BTF_ID_LIST(bpf_ctx_convert_btf_id)
>>   BTF_ID(struct, bpf_ctx_convert)
>>   
>> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
>> index eaae7f474eda..dc6e994feeb9 100644
>> --- a/kernel/bpf/helpers.c
>> +++ b/kernel/bpf/helpers.c
>> @@ -1824,6 +1824,11 @@ struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
>>   	return __bpf_list_del(head, true);
>>   }
>>   
>> +void *bpf_cast_to_kern_ctx(void *obj)
>> +{
>> +	return obj;
>> +}
>> +
>>   __diag_pop();
>>   
>>   BTF_SET8_START(generic_btf_ids)
>> @@ -1844,6 +1849,7 @@ static const struct btf_kfunc_id_set generic_kfunc_set = {
>>   };
>>   
>>   BTF_SET8_START(common_btf_ids)
>> +BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx)
>>   BTF_SET8_END(common_btf_ids)
>>   
>>   static const struct btf_kfunc_id_set common_kfunc_set = {
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 195d24316750..a18b519c5225 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -8118,6 +8118,7 @@ enum special_kfunc_type {
>>   	KF_bpf_list_push_back,
>>   	KF_bpf_list_pop_front,
>>   	KF_bpf_list_pop_back,
>> +	KF_bpf_cast_to_kern_ctx,
>>   };
>>   
>>   BTF_SET_START(special_kfunc_set)
>> @@ -8127,6 +8128,7 @@ BTF_ID(func, bpf_list_push_front)
>>   BTF_ID(func, bpf_list_push_back)
>>   BTF_ID(func, bpf_list_pop_front)
>>   BTF_ID(func, bpf_list_pop_back)
>> +BTF_ID(func, bpf_cast_to_kern_ctx)
>>   BTF_SET_END(special_kfunc_set)
>>   
>>   BTF_ID_LIST(special_kfunc_list)
>> @@ -8136,6 +8138,7 @@ BTF_ID(func, bpf_list_push_front)
>>   BTF_ID(func, bpf_list_push_back)
>>   BTF_ID(func, bpf_list_pop_front)
>>   BTF_ID(func, bpf_list_pop_back)
>> +BTF_ID(func, bpf_cast_to_kern_ctx)
>>   
>>   static enum kfunc_ptr_arg_type
>>   get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
>> @@ -8149,6 +8152,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
>>   	struct bpf_reg_state *reg = &regs[regno];
>>   	bool arg_mem_size = false;
>>   
>> +	if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx])
>> +		return KF_ARG_PTR_TO_CTX;
>> +
>>   	/* In this function, we verify the kfunc's BTF as per the argument type,
>>   	 * leaving the rest of the verification with respect to the register
>>   	 * type to our caller. When a set of conditions hold in the BTF type of
>> @@ -8633,6 +8639,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>>   				verbose(env, "arg#%d expected pointer to ctx, but got %s\n", i, btf_type_str(t));
>>   				return -EINVAL;
>>   			}
>> +
>> +			if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
>> +				ret = get_kern_ctx_btf_id(&env->log, resolve_prog_type(env->prog));
>> +				if (ret < 0)
>> +					return -EINVAL;
>> +				meta->arg_constant.value = ret;
> 
> It's not an arg. So 'arg_constant' doesn't fit.
> No need to save every byte in bpf_kfunc_call_arg_meta.
> Let's add new filed like 'ret_btf_id'.

Okay, I can do that.

> 
>> +			}
>>   			break;
>>   		case KF_ARG_PTR_TO_ALLOC_BTF_ID:
>>   			if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
>> @@ -8880,6 +8893,11 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>>   				regs[BPF_REG_0].btf = field->list_head.btf;
>>   				regs[BPF_REG_0].btf_id = field->list_head.value_btf_id;
>>   				regs[BPF_REG_0].off = field->list_head.node_offset;
>> +			} else if (meta.func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
>> +				mark_reg_known_zero(env, regs, BPF_REG_0);
>> +				regs[BPF_REG_0].type = PTR_TO_BTF_ID;
> 
> Let's use PTR_TO_BTF_ID | PTR_TRUSTED here.
> PTR_TRUSTED was just recently added (hours ago :)
> With that bpf_cast_to_kern_ctx() will return trusted pointer and we will be able
> to pass it to kfuncs and helpers that expect valid args.

Right, will add PTR_TRUSTED in the next revision.

> 
>> +				regs[BPF_REG_0].btf = desc_btf;
>> +				regs[BPF_REG_0].btf_id = meta.arg_constant.value;
>>   			} else {
>>   				verbose(env, "kernel function %s unhandled dynamic return type\n",
>>   					meta.func_name);
>> @@ -15130,6 +15148,9 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>>   		insn_buf[1] = addr[1];
>>   		insn_buf[2] = *insn;
>>   		*cnt = 3;
>> +	} else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx]) {
>> +		insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
>> +		*cnt = 1;
> 
> Nice! Important optimization.
> I guess we still need:
>   +void *bpf_cast_to_kern_ctx(void *obj)
>   +{
>   +     return obj;
>   +}
> otherwise resolve_btfids will be confused?

Right, we still need the above function definition so resolve_btfids can
properly populate the kfunc id for verification purposes.


* Re: [PATCH bpf-next v2 2/4] bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx
  2022-11-20 18:55     ` Yonghong Song
@ 2022-11-20 19:10       ` Alexei Starovoitov
  2022-11-20 19:24         ` Yonghong Song
  0 siblings, 1 reply; 10+ messages in thread
From: Alexei Starovoitov @ 2022-11-20 19:10 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, John Fastabend, kernel-team,
	Kumar Kartikeya Dwivedi, Martin KaFai Lau

On Sun, Nov 20, 2022 at 10:55:54AM -0800, Yonghong Song wrote:
> 
> 
> On 11/20/22 10:33 AM, Alexei Starovoitov wrote:
> > On Sun, Nov 20, 2022 at 08:15:22AM -0800, Yonghong Song wrote:
> > > Implement bpf_cast_to_kern_ctx() kfunc which does a type cast
> > > of a uapi ctx object to the corresponding kernel ctx. Previously
> > > if users want to access some data available in kctx but not
> > > in uapi ctx, bpf_probe_read_kernel() helper is needed.
> > > The introduction of bpf_cast_to_kern_ctx() allows direct
> > > memory access which makes code simpler and easier to understand.
> > > 
> > > Signed-off-by: Yonghong Song <yhs@fb.com>
> > > ---
> > >   include/linux/btf.h   |  5 +++++
> > >   kernel/bpf/btf.c      | 25 +++++++++++++++++++++++++
> > >   kernel/bpf/helpers.c  |  6 ++++++
> > >   kernel/bpf/verifier.c | 21 +++++++++++++++++++++
> > >   4 files changed, 57 insertions(+)
> > > 
> > > diff --git a/include/linux/btf.h b/include/linux/btf.h
> > > index d5b26380a60f..4b5d799f5d02 100644
> > > --- a/include/linux/btf.h
> > > +++ b/include/linux/btf.h
> > > @@ -470,6 +470,7 @@ const struct btf_member *
> > >   btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
> > >   		      const struct btf_type *t, enum bpf_prog_type prog_type,
> > >   		      int arg);
> > > +int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type);
> > >   bool btf_types_are_same(const struct btf *btf1, u32 id1,
> > >   			const struct btf *btf2, u32 id2);
> > >   #else
> > > @@ -514,6 +515,10 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
> > >   {
> > >   	return NULL;
> > >   }
> > > +static inline int get_kern_ctx_btf_id(struct bpf_verifier_log *log,
> > > +				      enum bpf_prog_type prog_type) {
> > > +	return -EINVAL;
> > > +}
> > >   static inline bool btf_types_are_same(const struct btf *btf1, u32 id1,
> > >   				      const struct btf *btf2, u32 id2)
> > >   {
> > > diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> > > index 0a3abbe56c5d..bef1b6cfe6b8 100644
> > > --- a/kernel/bpf/btf.c
> > > +++ b/kernel/bpf/btf.c
> > > @@ -5603,6 +5603,31 @@ static int btf_translate_to_vmlinux(struct bpf_verifier_log *log,
> > >   	return kern_ctx_type->type;
> > >   }
> > > +int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type)
> > > +{
> > > +	const struct btf_member *kctx_member;
> > > +	const struct btf_type *conv_struct;
> > > +	const struct btf_type *kctx_type;
> > > +	u32 kctx_type_id;
> > > +
> > > +	conv_struct = bpf_ctx_convert.t;
> > > +	if (!conv_struct) {
> > > +		bpf_log(log, "btf_vmlinux is malformed\n");
> > > +		return -EINVAL;
> > > +	}
> > 
> > If we get to this point this internal pointer would be already checked.
> > No need to check it again. Just use it.
> 
> This is probably not true.
> 
> Currently, conv_struct is tested in function btf_get_prog_ctx_type() which
> is called by get_kfunc_ptr_arg_type().
> 
> const struct btf_member *
> btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
>                       const struct btf_type *t, enum bpf_prog_type
> prog_type,
>                       int arg)
> {
>         const struct btf_type *conv_struct;
>         const struct btf_type *ctx_struct;
>         const struct btf_member *ctx_type;
>         const char *tname, *ctx_tname;
> 
>         conv_struct = bpf_ctx_convert.t;
>         if (!conv_struct) {
>                 bpf_log(log, "btf_vmlinux is malformed\n");
>                 return NULL;
>         }
> 	...
> }
> 
> In get_kfunc_ptr_arg_type(),
> 
> ...
> 
>         /* In this function, we verify the kfunc's BTF as per the argument
> type,
>          * leaving the rest of the verification with respect to the register
>          * type to our caller. When a set of conditions hold in the BTF type
> of
>          * arguments, we resolve it to a known kfunc_ptr_arg_type.
>          */
>         if (btf_get_prog_ctx_type(&env->log, meta->btf, t,
> resolve_prog_type(env->prog), argno))
>                 return KF_ARG_PTR_TO_CTX;
> 
> Note that if bpf_ctx_convert.t is NULL, btf_get_prog_ctx_type() simply
> returns NULL and the logic simply follows through.

Right. It will return NULL, the code further down won't see KF_ARG_PTR_TO_CTX,
and get_kern_ctx_btf_id() will not be called.
So it still looks to me like the check can be dropped.

> Should we actually add a NULL checking for bpf_ctx_convert.t in
> bpf_parse_vmlinux?

Ideally yes, but right now CONFIG_DEBUG_INFO_BTF can be enabled
independently and I'm afraid btf_get_prog_ctx_type() can be called
via btf_translate_to_vmlinux() even when btf_vmlinux == NULL.
So bpf_ctx_convert.t == NULL at that point
because btf_parse_vmlinux wasn't called.

> 
> ...
>         err = btf_check_type_tags(env, btf, 1);
>         if (err)
>                 goto errout;
> 
>         /* btf_parse_vmlinux() runs under bpf_verifier_lock */
>         bpf_ctx_convert.t = btf_type_by_id(btf, bpf_ctx_convert_btf_id[0]);
> 
>         bpf_struct_ops_init(btf, log);
> ...
> 


* Re: [PATCH bpf-next v2 2/4] bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx
  2022-11-20 19:10       ` Alexei Starovoitov
@ 2022-11-20 19:24         ` Yonghong Song
  0 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2022-11-20 19:24 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, John Fastabend, kernel-team,
	Kumar Kartikeya Dwivedi, Martin KaFai Lau



On 11/20/22 11:10 AM, Alexei Starovoitov wrote:
> On Sun, Nov 20, 2022 at 10:55:54AM -0800, Yonghong Song wrote:
>>
>>
>> On 11/20/22 10:33 AM, Alexei Starovoitov wrote:
>>> On Sun, Nov 20, 2022 at 08:15:22AM -0800, Yonghong Song wrote:
>>>> Implement bpf_cast_to_kern_ctx() kfunc which does a type cast
>>>> of a uapi ctx object to the corresponding kernel ctx. Previously
>>>> if users want to access some data available in kctx but not
>>>> in uapi ctx, bpf_probe_read_kernel() helper is needed.
>>>> The introduction of bpf_cast_to_kern_ctx() allows direct
>>>> memory access which makes code simpler and easier to understand.
>>>>
>>>> Signed-off-by: Yonghong Song <yhs@fb.com>
>>>> ---
>>>>    include/linux/btf.h   |  5 +++++
>>>>    kernel/bpf/btf.c      | 25 +++++++++++++++++++++++++
>>>>    kernel/bpf/helpers.c  |  6 ++++++
>>>>    kernel/bpf/verifier.c | 21 +++++++++++++++++++++
>>>>    4 files changed, 57 insertions(+)
>>>>
>>>> diff --git a/include/linux/btf.h b/include/linux/btf.h
>>>> index d5b26380a60f..4b5d799f5d02 100644
>>>> --- a/include/linux/btf.h
>>>> +++ b/include/linux/btf.h
>>>> @@ -470,6 +470,7 @@ const struct btf_member *
>>>>    btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
>>>>    		      const struct btf_type *t, enum bpf_prog_type prog_type,
>>>>    		      int arg);
>>>> +int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type);
>>>>    bool btf_types_are_same(const struct btf *btf1, u32 id1,
>>>>    			const struct btf *btf2, u32 id2);
>>>>    #else
>>>> @@ -514,6 +515,10 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
>>>>    {
>>>>    	return NULL;
>>>>    }
>>>> +static inline int get_kern_ctx_btf_id(struct bpf_verifier_log *log,
>>>> +				      enum bpf_prog_type prog_type) {
>>>> +	return -EINVAL;
>>>> +}
>>>>    static inline bool btf_types_are_same(const struct btf *btf1, u32 id1,
>>>>    				      const struct btf *btf2, u32 id2)
>>>>    {
>>>> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
>>>> index 0a3abbe56c5d..bef1b6cfe6b8 100644
>>>> --- a/kernel/bpf/btf.c
>>>> +++ b/kernel/bpf/btf.c
>>>> @@ -5603,6 +5603,31 @@ static int btf_translate_to_vmlinux(struct bpf_verifier_log *log,
>>>>    	return kern_ctx_type->type;
>>>>    }
>>>> +int get_kern_ctx_btf_id(struct bpf_verifier_log *log, enum bpf_prog_type prog_type)
>>>> +{
>>>> +	const struct btf_member *kctx_member;
>>>> +	const struct btf_type *conv_struct;
>>>> +	const struct btf_type *kctx_type;
>>>> +	u32 kctx_type_id;
>>>> +
>>>> +	conv_struct = bpf_ctx_convert.t;
>>>> +	if (!conv_struct) {
>>>> +		bpf_log(log, "btf_vmlinux is malformed\n");
>>>> +		return -EINVAL;
>>>> +	}
>>>
>>> If we get to this point this internal pointer would be already checked.
>>> No need to check it again. Just use it.
>>
>> This is probably not true.
>>
>> Currently, conv_struct is tested in function btf_get_prog_ctx_type() which
>> is called by get_kfunc_ptr_arg_type().
>>
>> const struct btf_member *
>> btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
>>                        const struct btf_type *t, enum bpf_prog_type
>> prog_type,
>>                        int arg)
>> {
>>          const struct btf_type *conv_struct;
>>          const struct btf_type *ctx_struct;
>>          const struct btf_member *ctx_type;
>>          const char *tname, *ctx_tname;
>>
>>          conv_struct = bpf_ctx_convert.t;
>>          if (!conv_struct) {
>>                  bpf_log(log, "btf_vmlinux is malformed\n");
>>                  return NULL;
>>          }
>> 	...
>> }
>>
>> In get_kfunc_ptr_arg_type(),
>>
>> ...
>>
>>          /* In this function, we verify the kfunc's BTF as per the argument
>> type,
>>           * leaving the rest of the verification with respect to the register
>>           * type to our caller. When a set of conditions hold in the BTF type
>> of
>>           * arguments, we resolve it to a known kfunc_ptr_arg_type.
>>           */
>>          if (btf_get_prog_ctx_type(&env->log, meta->btf, t,
>> resolve_prog_type(env->prog), argno))
>>                  return KF_ARG_PTR_TO_CTX;
>>
>> Note that if bpf_ctx_convert.t is NULL, btf_get_prog_ctx_type() simply
>> returns NULL and the logic simply follows through.
> 
> Right. It will return NULL and the code further won't see KF_ARG_PTR_TO_CTX
> and will not call get_kern_ctx_btf_id().
> So it still looks to me that the check can be dropped.
> 
>> Should we actually add a NULL checking for bpf_ctx_convert.t in
>> bpf_parse_vmlinux?
> 
> Ideally yes, but right now CONFIG_DEBUG_INFO_BTF can be enabled
> independently and I'm afraid btf_get_prog_ctx_type() can be called
> via btf_translate_to_vmlinux() even when btf_vmlinux == NULL.
> So bpf_ctx_convert.t == NULL at that point
> because btf_parse_vmlinux wasn't called.

Okay, I see. btf_get_prog_ctx_type() could be called even when the vmlinux
BTF is not parsed yet, but get_kfunc_ptr_arg_type() should already have the
vmlinux BTF parsed properly. So for the bpf_cast_to_kern_ctx() handling,
bpf_ctx_convert.t should not be NULL.

Will drop the error check in the next revision.

> 
>>
>> ...
>>          err = btf_check_type_tags(env, btf, 1);
>>          if (err)
>>                  goto errout;
>>
>>          /* btf_parse_vmlinux() runs under bpf_verifier_lock */
>>          bpf_ctx_convert.t = btf_type_by_id(btf, bpf_ctx_convert_btf_id[0]);
>>
>>          bpf_struct_ops_init(btf, log);
>> ...
>>


