* [PATCH RFC bpf-next 0/1] bpf: Support 64-bit pointers to kfuncs
@ 2023-02-14 21:28 Ilya Leoshkevich
  2023-02-14 21:28 ` [PATCH RFC bpf-next 1/1] " Ilya Leoshkevich
  0 siblings, 1 reply; 12+ messages in thread
From: Ilya Leoshkevich @ 2023-02-14 21:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Ilya Leoshkevich

Hi,

This is the first attempt to solve the problems outlined in [1, 2, 3].
The main problem is that kfuncs in modules do not fit into bpf_insn.imm
on s390x; the secondary problem is that there is a conflict between
"abstract" XDP metadata function BTF ids and their "concrete"
addresses.

The proposed solution is to keep kfunc BTF ids in bpf_insn.imm and put
the addresses into bpf_kfunc_desc, which does not have size
restrictions.

Jiri and I discussed putting them into bpf_insn_aux_data, but at the
very beginning of the implementation it became clear that this struct
is freed at the end of verification and is not available during JITing.

I regtested this only on s390x, where it does not regress anything. If
this approach is fine (especially w.r.t. dev-bound kfuncs), I can check
the other arches.

[1] https://lore.kernel.org/bpf/Y9%2FyrKZkBK6yzXp+@krava/
[2] https://lore.kernel.org/bpf/20230128000650.1516334-1-iii@linux.ibm.com/
[3] https://lore.kernel.org/bpf/20230128000650.1516334-32-iii@linux.ibm.com/

Best regards,
Ilya

Ilya Leoshkevich (1):
  bpf: Support 64-bit pointers to kfuncs

 include/linux/bpf.h   |  2 ++
 kernel/bpf/core.c     | 23 ++++++++++---
 kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
 3 files changed, 45 insertions(+), 59 deletions(-)

-- 
2.39.1



* [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-14 21:28 [PATCH RFC bpf-next 0/1] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
@ 2023-02-14 21:28 ` Ilya Leoshkevich
  2023-02-14 23:14   ` kernel test robot
                     ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Ilya Leoshkevich @ 2023-02-14 21:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Ilya Leoshkevich

test_ksyms_module fails to emit a kfunc call targeting a module on
s390x, because the verifier stores the difference between kfunc
address and __bpf_call_base in bpf_insn.imm, which is s32, and modules
are roughly (1 << 42) bytes away from the kernel on s390x.

Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
and storing the absolute address in bpf_kfunc_desc, which JITs retrieve
as usual by calling bpf_jit_get_func_addr().

This also fixes the problem with XDP metadata functions outlined in
the description of commit 63d7b53ab59f ("s390/bpf: Implement
bpf_jit_supports_kfunc_call()") by replacing address lookups with BTF
id lookups. This eliminates the inconsistency between "abstract" XDP
metadata functions' BTF ids and their concrete addresses.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 include/linux/bpf.h   |  2 ++
 kernel/bpf/core.c     | 23 ++++++++++---
 kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
 3 files changed, 45 insertions(+), 59 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index be34f7deb6c3..83ce94d11484 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2227,6 +2227,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
 const struct btf_func_model *
 bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 			 const struct bpf_insn *insn);
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr);
 struct bpf_core_ctx {
 	struct bpf_verifier_log *log;
 	const struct btf *btf;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 3390961c4e10..a42382afe333 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 {
 	s16 off = insn->off;
 	s32 imm = insn->imm;
+	bool fixed;
 	u8 *addr;
+	int err;
 
-	*func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
-	if (!*func_addr_fixed) {
+	switch (insn->src_reg) {
+	case BPF_PSEUDO_CALL:
 		/* Place-holder address till the last pass has collected
 		 * all addresses for JITed subprograms in which case we
 		 * can pick them up from prog->aux.
@@ -1200,16 +1202,29 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 			addr = (u8 *)prog->aux->func[off]->bpf_func;
 		else
 			return -EINVAL;
-	} else {
+		fixed = false;
+		break;
+	case 0:
 		/* Address of a BPF helper call. Since part of the core
 		 * kernel, it's always at a fixed location. __bpf_call_base
 		 * and the helper with imm relative to it are both in core
 		 * kernel.
 		 */
 		addr = (u8 *)__bpf_call_base + imm;
+		fixed = true;
+		break;
+	case BPF_PSEUDO_KFUNC_CALL:
+		err = bpf_get_kfunc_addr(prog, imm, off, &addr);
+		if (err)
+			return err;
+		fixed = true;
+		break;
+	default:
+		return -EINVAL;
 	}
 
-	*func_addr = (unsigned long)addr;
+	*func_addr_fixed = fixed;
+	*func_addr = addr;
 	return 0;
 }
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 21e08c111702..aea59974f0d6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2115,8 +2115,8 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
 struct bpf_kfunc_desc {
 	struct btf_func_model func_model;
 	u32 func_id;
-	s32 imm;
 	u16 offset;
+	unsigned long addr;
 };
 
 struct bpf_kfunc_btf {
@@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
 		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 }
 
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr)
+{
+	const struct bpf_kfunc_desc *desc;
+
+	desc = find_kfunc_desc(prog, func_id, offset);
+	if (WARN_ON_ONCE(!desc))
+		return -EINVAL;
+
+	*func_addr = (u8 *)desc->addr;
+	return 0;
+}
+
 static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
 					 s16 offset)
 {
@@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	struct bpf_kfunc_desc *desc;
 	const char *func_name;
 	struct btf *desc_btf;
-	unsigned long call_imm;
 	unsigned long addr;
+	void *xdp_kfunc;
 	int err;
 
 	prog_aux = env->prog->aux;
@@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		return -EINVAL;
 	}
 
-	call_imm = BPF_CALL_IMM(addr);
-	/* Check whether or not the relative offset overflows desc->imm */
-	if ((unsigned long)(s32)call_imm != call_imm) {
-		verbose(env, "address of kernel function %s is out of range\n",
-			func_name);
-		return -EINVAL;
-	}
-
 	if (bpf_dev_bound_kfunc_id(func_id)) {
 		err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
 		if (err)
 			return err;
+
+		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
+		if (xdp_kfunc)
+			addr = (unsigned long)xdp_kfunc;
+		/* fallback to default kfunc when not supported by netdev */
 	}
 
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
-	desc->imm = call_imm;
 	desc->offset = offset;
+	desc->addr = addr;
 	err = btf_distill_func_proto(&env->log, desc_btf,
 				     func_proto, func_name,
 				     &desc->func_model);
@@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	return err;
 }
 
-static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
-{
-	const struct bpf_kfunc_desc *d0 = a;
-	const struct bpf_kfunc_desc *d1 = b;
-
-	if (d0->imm > d1->imm)
-		return 1;
-	else if (d0->imm < d1->imm)
-		return -1;
-	return 0;
-}
-
-static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
-{
-	struct bpf_kfunc_desc_tab *tab;
-
-	tab = prog->aux->kfunc_tab;
-	if (!tab)
-		return;
-
-	sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
-	     kfunc_desc_cmp_by_imm, NULL);
-}
-
 bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
 {
 	return !!prog->aux->kfunc_tab;
@@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 			 const struct bpf_insn *insn)
 {
 	const struct bpf_kfunc_desc desc = {
-		.imm = insn->imm,
+		.func_id = insn->imm,
+		.offset = insn->off,
 	};
 	const struct bpf_kfunc_desc *res;
 	struct bpf_kfunc_desc_tab *tab;
 
 	tab = prog->aux->kfunc_tab;
 	res = bsearch(&desc, tab->descs, tab->nr_descs,
-		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
+		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 
 	return res ? &res->func_model : NULL;
 }
@@ -16251,7 +16238,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			    struct bpf_insn *insn_buf, int insn_idx, int *cnt)
 {
 	const struct bpf_kfunc_desc *desc;
-	void *xdp_kfunc;
 
 	if (!insn->imm) {
 		verbose(env, "invalid kernel function call not eliminated in verifier pass\n");
@@ -16259,20 +16245,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	}
 
 	*cnt = 0;
-
-	if (bpf_dev_bound_kfunc_id(insn->imm)) {
-		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, insn->imm);
-		if (xdp_kfunc) {
-			insn->imm = BPF_CALL_IMM(xdp_kfunc);
-			return 0;
-		}
-
-		/* fallback to default kfunc when not supported by netdev */
-	}
-
-	/* insn->imm has the btf func_id. Replace it with
-	 * an address (relative to __bpf_call_base).
-	 */
 	desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
 	if (!desc) {
 		verbose(env, "verifier internal error: kernel function descriptor not found for func_id %u\n",
@@ -16280,7 +16252,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return -EFAULT;
 	}
 
-	insn->imm = desc->imm;
 	if (insn->off)
 		return 0;
 	if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
@@ -16834,8 +16805,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		}
 	}
 
-	sort_kfunc_descs_by_imm(env->prog);
-
 	return 0;
 }
 
-- 
2.39.1



* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-14 21:28 ` [PATCH RFC bpf-next 1/1] " Ilya Leoshkevich
@ 2023-02-14 23:14   ` kernel test robot
  2023-02-14 23:58   ` Stanislav Fomichev
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 12+ messages in thread
From: kernel test robot @ 2023-02-14 23:14 UTC (permalink / raw)
  To: Ilya Leoshkevich; +Cc: oe-kbuild-all

Hi Ilya,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Ilya-Leoshkevich/bpf-Support-64-bit-pointers-to-kfuncs/20230215-054702
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230214212809.242632-2-iii%40linux.ibm.com
patch subject: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
config: loongarch-allyesconfig (https://download.01.org/0day-ci/archive/20230215/202302150614.eexWgsVE-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/5b93c73d640845b2e5ddd1b4af608f4896379002
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Ilya-Leoshkevich/bpf-Support-64-bit-pointers-to-kfuncs/20230215-054702
        git checkout 5b93c73d640845b2e5ddd1b4af608f4896379002
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash kernel/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202302150614.eexWgsVE-lkp@intel.com/

All warnings (new ones prefixed by >>):

   kernel/bpf/core.c: In function 'bpf_jit_get_func_addr':
>> kernel/bpf/core.c:1227:20: warning: assignment to 'u64' {aka 'long long unsigned int'} from 'u8 *' {aka 'unsigned char *'} makes integer from pointer without a cast [-Wint-conversion]
    1227 |         *func_addr = addr;
         |                    ^


vim +1227 kernel/bpf/core.c

  1181	
  1182	int bpf_jit_get_func_addr(const struct bpf_prog *prog,
  1183				  const struct bpf_insn *insn, bool extra_pass,
  1184				  u64 *func_addr, bool *func_addr_fixed)
  1185	{
  1186		s16 off = insn->off;
  1187		s32 imm = insn->imm;
  1188		bool fixed;
  1189		u8 *addr;
  1190		int err;
  1191	
  1192		switch (insn->src_reg) {
  1193		case BPF_PSEUDO_CALL:
  1194			/* Place-holder address till the last pass has collected
  1195			 * all addresses for JITed subprograms in which case we
  1196			 * can pick them up from prog->aux.
  1197			 */
  1198			if (!extra_pass)
  1199				addr = NULL;
  1200			else if (prog->aux->func &&
  1201				 off >= 0 && off < prog->aux->func_cnt)
  1202				addr = (u8 *)prog->aux->func[off]->bpf_func;
  1203			else
  1204				return -EINVAL;
  1205			fixed = false;
  1206			break;
  1207		case 0:
  1208			/* Address of a BPF helper call. Since part of the core
  1209			 * kernel, it's always at a fixed location. __bpf_call_base
  1210			 * and the helper with imm relative to it are both in core
  1211			 * kernel.
  1212			 */
  1213			addr = (u8 *)__bpf_call_base + imm;
  1214			fixed = true;
  1215			break;
  1216		case BPF_PSEUDO_KFUNC_CALL:
  1217			err = bpf_get_kfunc_addr(prog, imm, off, &addr);
  1218			if (err)
  1219				return err;
  1220			fixed = true;
  1221			break;
  1222		default:
  1223			return -EINVAL;
  1224		}
  1225	
  1226		*func_addr_fixed = fixed;
> 1227		*func_addr = addr;
  1228		return 0;
  1229	}
  1230	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests


* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-14 21:28 ` [PATCH RFC bpf-next 1/1] " Ilya Leoshkevich
  2023-02-14 23:14   ` kernel test robot
@ 2023-02-14 23:58   ` Stanislav Fomichev
  2023-02-15 10:07     ` Ilya Leoshkevich
  2023-02-16  7:46   ` kernel test robot
  2023-02-16 16:33   ` kernel test robot
  3 siblings, 1 reply; 12+ messages in thread
From: Stanislav Fomichev @ 2023-02-14 23:58 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa

On 02/14, Ilya Leoshkevich wrote:
> test_ksyms_module fails to emit a kfunc call targeting a module on
> s390x, because the verifier stores the difference between kfunc
> address and __bpf_call_base in bpf_insn.imm, which is s32, and modules
> are roughly (1 << 42) bytes away from the kernel on s390x.

> Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
> and storing the absolute address in bpf_kfunc_desc, which JITs retrieve
> as usual by calling bpf_jit_get_func_addr().

> This also fixes the problem with XDP metadata functions outlined in
> the description of commit 63d7b53ab59f ("s390/bpf: Implement
> bpf_jit_supports_kfunc_call()") by replacing address lookups with BTF
> id lookups. This eliminates the inconsistency between "abstract" XDP
> metadata functions' BTF ids and their concrete addresses.

> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> ---
>   include/linux/bpf.h   |  2 ++
>   kernel/bpf/core.c     | 23 ++++++++++---
>   kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
>   3 files changed, 45 insertions(+), 59 deletions(-)

> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index be34f7deb6c3..83ce94d11484 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2227,6 +2227,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog  
> *prog);
>   const struct btf_func_model *
>   bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
>   			 const struct bpf_insn *insn);
> +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16  
> offset,
> +		       u8 **func_addr);
>   struct bpf_core_ctx {
>   	struct bpf_verifier_log *log;
>   	const struct btf *btf;
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 3390961c4e10..a42382afe333 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog  
> *prog,
>   {
>   	s16 off = insn->off;
>   	s32 imm = insn->imm;
> +	bool fixed;
>   	u8 *addr;
> +	int err;

> -	*func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
> -	if (!*func_addr_fixed) {
> +	switch (insn->src_reg) {
> +	case BPF_PSEUDO_CALL:
>   		/* Place-holder address till the last pass has collected
>   		 * all addresses for JITed subprograms in which case we
>   		 * can pick them up from prog->aux.
> @@ -1200,16 +1202,29 @@ int bpf_jit_get_func_addr(const struct bpf_prog  
> *prog,
>   			addr = (u8 *)prog->aux->func[off]->bpf_func;
>   		else
>   			return -EINVAL;
> -	} else {
> +		fixed = false;
> +		break;

[..]

> +	case 0:

nit: Should we define BPF_HELPER_CALL here for consistency?

>   		/* Address of a BPF helper call. Since part of the core
>   		 * kernel, it's always at a fixed location. __bpf_call_base
>   		 * and the helper with imm relative to it are both in core
>   		 * kernel.
>   		 */
>   		addr = (u8 *)__bpf_call_base + imm;
> +		fixed = true;
> +		break;
> +	case BPF_PSEUDO_KFUNC_CALL:
> +		err = bpf_get_kfunc_addr(prog, imm, off, &addr);
> +		if (err)
> +			return err;
> +		fixed = true;
> +		break;
> +	default:
> +		return -EINVAL;
>   	}

> -	*func_addr = (unsigned long)addr;
> +	*func_addr_fixed = fixed;
> +	*func_addr = addr;
>   	return 0;
>   }

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 21e08c111702..aea59974f0d6 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2115,8 +2115,8 @@ static int add_subprog(struct bpf_verifier_env  
> *env, int off)
>   struct bpf_kfunc_desc {
>   	struct btf_func_model func_model;
>   	u32 func_id;
> -	s32 imm;
>   	u16 offset;
> +	unsigned long addr;
>   };

>   struct bpf_kfunc_btf {
> @@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog, u32  
> func_id, u16 offset)
>   		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
>   }


[..]

> +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16  
> offset,
> +		       u8 **func_addr)
> +{
> +	const struct bpf_kfunc_desc *desc;
> +
> +	desc = find_kfunc_desc(prog, func_id, offset);
> +	if (WARN_ON_ONCE(!desc))
> +		return -EINVAL;
> +
> +	*func_addr = (u8 *)desc->addr;
> +	return 0;
> +}

This function isn't doing much and has a single caller. Should we just
export find_kfunc_desc?

> +
>   static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
>   					 s16 offset)
>   {
> @@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct bpf_verifier_env  
> *env, u32 func_id, s16 offset)
>   	struct bpf_kfunc_desc *desc;
>   	const char *func_name;
>   	struct btf *desc_btf;
> -	unsigned long call_imm;
>   	unsigned long addr;
> +	void *xdp_kfunc;
>   	int err;

>   	prog_aux = env->prog->aux;
> @@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct bpf_verifier_env  
> *env, u32 func_id, s16 offset)
>   		return -EINVAL;
>   	}

> -	call_imm = BPF_CALL_IMM(addr);
> -	/* Check whether or not the relative offset overflows desc->imm */
> -	if ((unsigned long)(s32)call_imm != call_imm) {
> -		verbose(env, "address of kernel function %s is out of range\n",
> -			func_name);
> -		return -EINVAL;
> -	}
> -
>   	if (bpf_dev_bound_kfunc_id(func_id)) {
>   		err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
>   		if (err)
>   			return err;
> +
> +		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
> +		if (xdp_kfunc)
> +			addr = (unsigned long)xdp_kfunc;
> +		/* fallback to default kfunc when not supported by netdev */
>   	}

>   	desc = &tab->descs[tab->nr_descs++];
>   	desc->func_id = func_id;
> -	desc->imm = call_imm;
>   	desc->offset = offset;
> +	desc->addr = addr;
>   	err = btf_distill_func_proto(&env->log, desc_btf,
>   				     func_proto, func_name,
>   				     &desc->func_model);
> @@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct bpf_verifier_env  
> *env, u32 func_id, s16 offset)
>   	return err;
>   }

> -static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
> -{
> -	const struct bpf_kfunc_desc *d0 = a;
> -	const struct bpf_kfunc_desc *d1 = b;
> -
> -	if (d0->imm > d1->imm)
> -		return 1;
> -	else if (d0->imm < d1->imm)
> -		return -1;
> -	return 0;
> -}
> -
> -static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
> -{
> -	struct bpf_kfunc_desc_tab *tab;
> -
> -	tab = prog->aux->kfunc_tab;
> -	if (!tab)
> -		return;
> -
> -	sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
> -	     kfunc_desc_cmp_by_imm, NULL);
> -}
> -
>   bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
>   {
>   	return !!prog->aux->kfunc_tab;
> @@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct bpf_prog  
> *prog,
>   			 const struct bpf_insn *insn)
>   {
>   	const struct bpf_kfunc_desc desc = {
> -		.imm = insn->imm,
> +		.func_id = insn->imm,
> +		.offset = insn->off,
>   	};
>   	const struct bpf_kfunc_desc *res;
>   	struct bpf_kfunc_desc_tab *tab;

>   	tab = prog->aux->kfunc_tab;
>   	res = bsearch(&desc, tab->descs, tab->nr_descs,
> -		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
> +		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);

>   	return res ? &res->func_model : NULL;
>   }
> @@ -16251,7 +16238,6 @@ static int fixup_kfunc_call(struct  
> bpf_verifier_env *env, struct bpf_insn *insn,
>   			    struct bpf_insn *insn_buf, int insn_idx, int *cnt)
>   {
>   	const struct bpf_kfunc_desc *desc;
> -	void *xdp_kfunc;

>   	if (!insn->imm) {
>   		verbose(env, "invalid kernel function call not eliminated in verifier  
> pass\n");
> @@ -16259,20 +16245,6 @@ static int fixup_kfunc_call(struct  
> bpf_verifier_env *env, struct bpf_insn *insn,
>   	}

>   	*cnt = 0;
> -
> -	if (bpf_dev_bound_kfunc_id(insn->imm)) {
> -		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, insn->imm);
> -		if (xdp_kfunc) {
> -			insn->imm = BPF_CALL_IMM(xdp_kfunc);
> -			return 0;
> -		}
> -
> -		/* fallback to default kfunc when not supported by netdev */
> -	}
> -
> -	/* insn->imm has the btf func_id. Replace it with
> -	 * an address (relative to __bpf_call_base).
> -	 */
>   	desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
>   	if (!desc) {
>   		verbose(env, "verifier internal error: kernel function descriptor not  
> found for func_id %u\n",
> @@ -16280,7 +16252,6 @@ static int fixup_kfunc_call(struct  
> bpf_verifier_env *env, struct bpf_insn *insn,
>   		return -EFAULT;
>   	}

> -	insn->imm = desc->imm;
>   	if (insn->off)
>   		return 0;
>   	if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
> @@ -16834,8 +16805,6 @@ static int do_misc_fixups(struct bpf_verifier_env  
> *env)
>   		}
>   	}


[..]

> -	sort_kfunc_descs_by_imm(env->prog);

If we are not doing sorting here, how does the bsearch work?

> -
>   	return 0;
>   }

> --
> 2.39.1



* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-14 23:58   ` Stanislav Fomichev
@ 2023-02-15 10:07     ` Ilya Leoshkevich
  2023-02-15 17:43       ` Stanislav Fomichev
  0 siblings, 1 reply; 12+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 10:07 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa

On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> On 02/14, Ilya Leoshkevich wrote:
> > test_ksyms_module fails to emit a kfunc call targeting a module on
> > s390x, because the verifier stores the difference between kfunc
> > address and __bpf_call_base in bpf_insn.imm, which is s32, and
> > modules
> > are roughly (1 << 42) bytes away from the kernel on s390x.
> 
> > Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
> > and storing the absolute address in bpf_kfunc_desc, which JITs
> > retrieve
> > as usual by calling bpf_jit_get_func_addr().
> 
> > This also fixes the problem with XDP metadata functions outlined in
> > the description of commit 63d7b53ab59f ("s390/bpf: Implement
> > bpf_jit_supports_kfunc_call()") by replacing address lookups with
> > BTF
> > id lookups. This eliminates the inconsistency between "abstract"
> > XDP
> > metadata functions' BTF ids and their concrete addresses.
> 
> > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > ---
> >   include/linux/bpf.h   |  2 ++
> >   kernel/bpf/core.c     | 23 ++++++++++---
> >   kernel/bpf/verifier.c | 79 +++++++++++++-------------------------
> > -----
> >   3 files changed, 45 insertions(+), 59 deletions(-)
> 
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index be34f7deb6c3..83ce94d11484 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -2227,6 +2227,8 @@ bool bpf_prog_has_kfunc_call(const struct
> > bpf_prog  
> > *prog);
> >   const struct btf_func_model *
> >   bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> >                          const struct bpf_insn *insn);
> > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
> > u16  
> > offset,
> > +                      u8 **func_addr);
> >   struct bpf_core_ctx {
> >         struct bpf_verifier_log *log;
> >         const struct btf *btf;
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index 3390961c4e10..a42382afe333 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct
> > bpf_prog  
> > *prog,
> >   {
> >         s16 off = insn->off;
> >         s32 imm = insn->imm;
> > +       bool fixed;
> >         u8 *addr;
> > +       int err;
> 
> > -       *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
> > -       if (!*func_addr_fixed) {
> > +       switch (insn->src_reg) {
> > +       case BPF_PSEUDO_CALL:
> >                 /* Place-holder address till the last pass has
> > collected
> >                  * all addresses for JITed subprograms in which
> > case we
> >                  * can pick them up from prog->aux.
> > @@ -1200,16 +1202,29 @@ int bpf_jit_get_func_addr(const struct
> > bpf_prog  
> > *prog,
> >                         addr = (u8 *)prog->aux->func[off]-
> > >bpf_func;
> >                 else
> >                         return -EINVAL;
> > -       } else {
> > +               fixed = false;
> > +               break;
> 
> [..]
> 
> > +       case 0:
> 
> nit: Should we define BPF_HELPER_CALL here for consistency?

I think this would be good: the verifier currently uses 0 for this
purpose, and a symbolic constant would surely improve readability.

> >                 /* Address of a BPF helper call. Since part of the
> > core
> >                  * kernel, it's always at a fixed location.
> > __bpf_call_base
> >                  * and the helper with imm relative to it are both
> > in core
> >                  * kernel.
> >                  */
> >                 addr = (u8 *)__bpf_call_base + imm;
> > +               fixed = true;
> > +               break;
> > +       case BPF_PSEUDO_KFUNC_CALL:
> > +               err = bpf_get_kfunc_addr(prog, imm, off, &addr);
> > +               if (err)
> > +                       return err;
> > +               fixed = true;
> > +               break;
> > +       default:
> > +               return -EINVAL;
> >         }
> 
> > -       *func_addr = (unsigned long)addr;
> > +       *func_addr_fixed = fixed;
> > +       *func_addr = addr;
> >         return 0;
> >   }
> 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 21e08c111702..aea59974f0d6 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -2115,8 +2115,8 @@ static int add_subprog(struct
> > bpf_verifier_env  
> > *env, int off)
> >   struct bpf_kfunc_desc {
> >         struct btf_func_model func_model;
> >         u32 func_id;
> > -       s32 imm;
> >         u16 offset;
> > +       unsigned long addr;
> >   };
> 
> >   struct bpf_kfunc_btf {
> > @@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog,
> > u32  
> > func_id, u16 offset)
> >                        sizeof(tab->descs[0]),
> > kfunc_desc_cmp_by_id_off);
> >   }
> 
> 
> [..]
> 
> > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
> > u16  
> > offset,
> > +                      u8 **func_addr)
> > +{
> > +       const struct bpf_kfunc_desc *desc;
> > +
> > +       desc = find_kfunc_desc(prog, func_id, offset);
> > +       if (WARN_ON_ONCE(!desc))
> > +               return -EINVAL;
> > +
> > +       *func_addr = (u8 *)desc->addr;
> > +       return 0;
> > +}
> 
> This function isn't doing much and has a single caller. Should we
> just
> export find_kfunc_desc?

We would have to export struct bpf_kfunc_desc as well; I thought it's
better to add an extra function so that we could keep hiding the
struct.

> > +
> >   static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env
> > *env,
> >                                          s16 offset)
> >   {
> > @@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct
> > bpf_verifier_env  
> > *env, u32 func_id, s16 offset)
> >         struct bpf_kfunc_desc *desc;
> >         const char *func_name;
> >         struct btf *desc_btf;
> > -       unsigned long call_imm;
> >         unsigned long addr;
> > +       void *xdp_kfunc;
> >         int err;
> 
> >         prog_aux = env->prog->aux;
> > @@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct
> > bpf_verifier_env  
> > *env, u32 func_id, s16 offset)
> >                 return -EINVAL;
> >         }
> 
> > -       call_imm = BPF_CALL_IMM(addr);
> > -       /* Check whether or not the relative offset overflows desc-
> > >imm */
> > -       if ((unsigned long)(s32)call_imm != call_imm) {
> > -               verbose(env, "address of kernel function %s is out
> > of range\n",
> > -                       func_name);
> > -               return -EINVAL;
> > -       }
> > -
> >         if (bpf_dev_bound_kfunc_id(func_id)) {
> >                 err = bpf_dev_bound_kfunc_check(&env->log,
> > prog_aux);
> >                 if (err)
> >                         return err;
> > +
> > +               xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog,
> > func_id);
> > +               if (xdp_kfunc)
> > +                       addr = (unsigned long)xdp_kfunc;
> > +               /* fallback to default kfunc when not supported by
> > netdev */
> >         }
> 
> >         desc = &tab->descs[tab->nr_descs++];
> >         desc->func_id = func_id;
> > -       desc->imm = call_imm;
> >         desc->offset = offset;
> > +       desc->addr = addr;
> >         err = btf_distill_func_proto(&env->log, desc_btf,
> >                                      func_proto, func_name,
> >                                      &desc->func_model);
> > @@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct
> > bpf_verifier_env  
> > *env, u32 func_id, s16 offset)
> >         return err;
> >   }
> 
> > -static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
> > -{
> > -       const struct bpf_kfunc_desc *d0 = a;
> > -       const struct bpf_kfunc_desc *d1 = b;
> > -
> > -       if (d0->imm > d1->imm)
> > -               return 1;
> > -       else if (d0->imm < d1->imm)
> > -               return -1;
> > -       return 0;
> > -}
> > -
> > -static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
> > -{
> > -       struct bpf_kfunc_desc_tab *tab;
> > -
> > -       tab = prog->aux->kfunc_tab;
> > -       if (!tab)
> > -               return;
> > -
> > -       sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
> > -            kfunc_desc_cmp_by_imm, NULL);
> > -}
> > -
> >   bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
> >   {
> >         return !!prog->aux->kfunc_tab;
> > @@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct
> > bpf_prog  
> > *prog,
> >                          const struct bpf_insn *insn)
> >   {
> >         const struct bpf_kfunc_desc desc = {
> > -               .imm = insn->imm,
> > +               .func_id = insn->imm,
> > +               .offset = insn->off,
> >         };
> >         const struct bpf_kfunc_desc *res;
> >         struct bpf_kfunc_desc_tab *tab;
> 
> >         tab = prog->aux->kfunc_tab;
> >         res = bsearch(&desc, tab->descs, tab->nr_descs,
> > -                     sizeof(tab->descs[0]),
> > kfunc_desc_cmp_by_imm);
> > +                     sizeof(tab->descs[0]),
> > kfunc_desc_cmp_by_id_off);
> 
> >         return res ? &res->func_model : NULL;
> >   }
> > @@ -16251,7 +16238,6 @@ static int fixup_kfunc_call(struct  
> > bpf_verifier_env *env, struct bpf_insn *insn,
> >                             struct bpf_insn *insn_buf, int
> > insn_idx, int *cnt)
> >   {
> >         const struct bpf_kfunc_desc *desc;
> > -       void *xdp_kfunc;
> 
> >         if (!insn->imm) {
> >                 verbose(env, "invalid kernel function call not
> > eliminated in verifier  
> > pass\n");
> > @@ -16259,20 +16245,6 @@ static int fixup_kfunc_call(struct  
> > bpf_verifier_env *env, struct bpf_insn *insn,
> >         }
> 
> >         *cnt = 0;
> > -
> > -       if (bpf_dev_bound_kfunc_id(insn->imm)) {
> > -               xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog,
> > insn->imm);
> > -               if (xdp_kfunc) {
> > -                       insn->imm = BPF_CALL_IMM(xdp_kfunc);
> > -                       return 0;
> > -               }
> > -
> > -               /* fallback to default kfunc when not supported by
> > netdev */
> > -       }
> > -
> > -       /* insn->imm has the btf func_id. Replace it with
> > -        * an address (relative to __bpf_call_base).
> > -        */
> >         desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
> >         if (!desc) {
> >                 verbose(env, "verifier internal error: kernel
> > function descriptor not  
> > found for func_id %u\n",
> > @@ -16280,7 +16252,6 @@ static int fixup_kfunc_call(struct  
> > bpf_verifier_env *env, struct bpf_insn *insn,
> >                 return -EFAULT;
> >         }
> 
> > -       insn->imm = desc->imm;
> >         if (insn->off)
> >                 return 0;
> >         if (desc->func_id ==
> > special_kfunc_list[KF_bpf_obj_new_impl]) {
> > @@ -16834,8 +16805,6 @@ static int do_misc_fixups(struct
> > bpf_verifier_env  
> > *env)
> >                 }
> >         }
> 
> 
> [..]
> 
> > -       sort_kfunc_descs_by_imm(env->prog);
> 
> If we are not doing sorting here, how does the bsearch work?

add_kfunc_call() already makes sure that kfuncs are sorted in the
kfunc_desc_cmp_by_id_off() order. fixup_kfunc_call() used to put
addresses into imms and re-sort based on that, however, after this
patch it's not needed anymore: the code (e.g.
bpf_jit_find_kfunc_model()) continues to do lookups in
the kfunc_desc_cmp_by_id_off() order.

> > -
> >         return 0;
> >   }
> 
> > --
> > 2.39.1
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-15 10:07     ` Ilya Leoshkevich
@ 2023-02-15 17:43       ` Stanislav Fomichev
  2023-02-15 17:49         ` Ilya Leoshkevich
  0 siblings, 1 reply; 12+ messages in thread
From: Stanislav Fomichev @ 2023-02-15 17:43 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa

On 02/15, Ilya Leoshkevich wrote:
> On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> > On 02/14, Ilya Leoshkevich wrote:
> > > test_ksyms_module fails to emit a kfunc call targeting a module on
> > > s390x, because the verifier stores the difference between kfunc
> > > address and __bpf_call_base in bpf_insn.imm, which is s32, and
> > > modules
> > > are roughly (1 << 42) bytes away from the kernel on s390x.
> >
> > > Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
> > > and storing the absolute address in bpf_kfunc_desc, which JITs
> > > retrieve
> > > as usual by calling bpf_jit_get_func_addr().
> >
> > > This also fixes the problem with XDP metadata functions outlined in
> > > the description of commit 63d7b53ab59f ("s390/bpf: Implement
> > > bpf_jit_supports_kfunc_call()") by replacing address lookups with
> > > BTF
> > > id lookups. This eliminates the inconsistency between "abstract"
> > > XDP
> > > metadata functions' BTF ids and their concrete addresses.
> >
> > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > ---
> > >   include/linux/bpf.h   |  2 ++
> > >   kernel/bpf/core.c     | 23 ++++++++++---
> > >   kernel/bpf/verifier.c | 79 +++++++++++++-------------------------
> > > -----
> > >   3 files changed, 45 insertions(+), 59 deletions(-)
> >
> > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > > index be34f7deb6c3..83ce94d11484 100644
> > > --- a/include/linux/bpf.h
> > > +++ b/include/linux/bpf.h
> > > @@ -2227,6 +2227,8 @@ bool bpf_prog_has_kfunc_call(const struct
> > > bpf_prog 
> > > *prog);
> > >   const struct btf_func_model *
> > >   bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> > >                          const struct bpf_insn *insn);
> > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
> > > u16 
> > > offset,
> > > +                      u8 **func_addr);
> > >   struct bpf_core_ctx {
> > >         struct bpf_verifier_log *log;
> > >         const struct btf *btf;
> > > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > > index 3390961c4e10..a42382afe333 100644
> > > --- a/kernel/bpf/core.c
> > > +++ b/kernel/bpf/core.c
> > > @@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct
> > > bpf_prog 
> > > *prog,
> > >   {
> > >         s16 off = insn->off;
> > >         s32 imm = insn->imm;
> > > +       bool fixed;
> > >         u8 *addr;
> > > +       int err;
> >
> > > -       *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
> > > -       if (!*func_addr_fixed) {
> > > +       switch (insn->src_reg) {
> > > +       case BPF_PSEUDO_CALL:
> > >                 /* Place-holder address till the last pass has
> > > collected
> > >                  * all addresses for JITed subprograms in which
> > > case we
> > >                  * can pick them up from prog->aux.
> > > @@ -1200,16 +1202,29 @@ int bpf_jit_get_func_addr(const struct
> > > bpf_prog 
> > > *prog,
> > >                         addr = (u8 *)prog->aux->func[off]-
> > > >bpf_func;
> > >                 else
> > >                         return -EINVAL;
> > > -       } else {
> > > +               fixed = false;
> > > +               break;
> >
> > [..]
> >
> > > +       case 0:
> >
> > nit: Should we define BPF_HELPER_CALL here for consistency?

> I think this would be good, the verifier currently uses 0 for this
> purpose; having a symbolic constant would surely improve readability.

> > >                 /* Address of a BPF helper call. Since part of the
> > > core
> > >                  * kernel, it's always at a fixed location.
> > > __bpf_call_base
> > >                  * and the helper with imm relative to it are both
> > > in core
> > >                  * kernel.
> > >                  */
> > >                 addr = (u8 *)__bpf_call_base + imm;
> > > +               fixed = true;
> > > +               break;
> > > +       case BPF_PSEUDO_KFUNC_CALL:
> > > +               err = bpf_get_kfunc_addr(prog, imm, off, &addr);
> > > +               if (err)
> > > +                       return err;
> > > +               fixed = true;
> > > +               break;
> > > +       default:
> > > +               return -EINVAL;
> > >         }
> >
> > > -       *func_addr = (unsigned long)addr;
> > > +       *func_addr_fixed = fixed;
> > > +       *func_addr = addr;
> > >         return 0;
> > >   }
> >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 21e08c111702..aea59974f0d6 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -2115,8 +2115,8 @@ static int add_subprog(struct
> > > bpf_verifier_env 
> > > *env, int off)
> > >   struct bpf_kfunc_desc {
> > >         struct btf_func_model func_model;
> > >         u32 func_id;
> > > -       s32 imm;
> > >         u16 offset;
> > > +       unsigned long addr;
> > >   };
> >
> > >   struct bpf_kfunc_btf {
> > > @@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog,
> > > u32 
> > > func_id, u16 offset)
> > >                        sizeof(tab->descs[0]),
> > > kfunc_desc_cmp_by_id_off);
> > >   }
> >
> >
> > [..]
> >
> > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
> > > u16 
> > > offset,
> > > +                      u8 **func_addr)
> > > +{
> > > +       const struct bpf_kfunc_desc *desc;
> > > +
> > > +       desc = find_kfunc_desc(prog, func_id, offset);
> > > +       if (WARN_ON_ONCE(!desc))
> > > +               return -EINVAL;
> > > +
> > > +       *func_addr = (u8 *)desc->addr;
> > > +       return 0;
> > > +}
> >
> > This function isn't doing much and has a single caller. Should we
> > just
> > export find_kfunc_desc?

> We would have to export struct bpf_kfunc_desc as well; I thought it's
> better to add an extra function so that we could keep hiding the
> struct.

Ah, good point. In this case seems ok to have this extra wrapper.
On that note: what's the purpose of WARN_ON_ONCE here?


> > > +
> > >   static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env
> > > *env,
> > >                                          s16 offset)
> > >   {
> > > @@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct
> > > bpf_verifier_env 
> > > *env, u32 func_id, s16 offset)
> > >         struct bpf_kfunc_desc *desc;
> > >         const char *func_name;
> > >         struct btf *desc_btf;
> > > -       unsigned long call_imm;
> > >         unsigned long addr;
> > > +       void *xdp_kfunc;
> > >         int err;
> >
> > >         prog_aux = env->prog->aux;
> > > @@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct
> > > bpf_verifier_env 
> > > *env, u32 func_id, s16 offset)
> > >                 return -EINVAL;
> > >         }
> >
> > > -       call_imm = BPF_CALL_IMM(addr);
> > > -       /* Check whether or not the relative offset overflows desc-
> > > >imm */
> > > -       if ((unsigned long)(s32)call_imm != call_imm) {
> > > -               verbose(env, "address of kernel function %s is out
> > > of range\n",
> > > -                       func_name);
> > > -               return -EINVAL;
> > > -       }
> > > -
> > >         if (bpf_dev_bound_kfunc_id(func_id)) {
> > >                 err = bpf_dev_bound_kfunc_check(&env->log,
> > > prog_aux);
> > >                 if (err)
> > >                         return err;
> > > +
> > > +               xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog,
> > > func_id);
> > > +               if (xdp_kfunc)
> > > +                       addr = (unsigned long)xdp_kfunc;
> > > +               /* fallback to default kfunc when not supported by
> > > netdev */
> > >         }
> >
> > >         desc = &tab->descs[tab->nr_descs++];
> > >         desc->func_id = func_id;
> > > -       desc->imm = call_imm;
> > >         desc->offset = offset;
> > > +       desc->addr = addr;
> > >         err = btf_distill_func_proto(&env->log, desc_btf,
> > >                                      func_proto, func_name,
> > >                                      &desc->func_model);
> > > @@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct
> > > bpf_verifier_env 
> > > *env, u32 func_id, s16 offset)
> > >         return err;
> > >   }
> >
> > > -static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
> > > -{
> > > -       const struct bpf_kfunc_desc *d0 = a;
> > > -       const struct bpf_kfunc_desc *d1 = b;
> > > -
> > > -       if (d0->imm > d1->imm)
> > > -               return 1;
> > > -       else if (d0->imm < d1->imm)
> > > -               return -1;
> > > -       return 0;
> > > -}
> > > -
> > > -static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
> > > -{
> > > -       struct bpf_kfunc_desc_tab *tab;
> > > -
> > > -       tab = prog->aux->kfunc_tab;
> > > -       if (!tab)
> > > -               return;
> > > -
> > > -       sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
> > > -            kfunc_desc_cmp_by_imm, NULL);
> > > -}
> > > -
> > >   bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
> > >   {
> > >         return !!prog->aux->kfunc_tab;
> > > @@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct
> > > bpf_prog 
> > > *prog,
> > >                          const struct bpf_insn *insn)
> > >   {
> > >         const struct bpf_kfunc_desc desc = {
> > > -               .imm = insn->imm,
> > > +               .func_id = insn->imm,
> > > +               .offset = insn->off,
> > >         };
> > >         const struct bpf_kfunc_desc *res;
> > >         struct bpf_kfunc_desc_tab *tab;
> >
> > >         tab = prog->aux->kfunc_tab;
> > >         res = bsearch(&desc, tab->descs, tab->nr_descs,
> > > -                     sizeof(tab->descs[0]),
> > > kfunc_desc_cmp_by_imm);
> > > +                     sizeof(tab->descs[0]),
> > > kfunc_desc_cmp_by_id_off);
> >
> > >         return res ? &res->func_model : NULL;
> > >   }
> > > @@ -16251,7 +16238,6 @@ static int fixup_kfunc_call(struct 
> > > bpf_verifier_env *env, struct bpf_insn *insn,
> > >                             struct bpf_insn *insn_buf, int
> > > insn_idx, int *cnt)
> > >   {
> > >         const struct bpf_kfunc_desc *desc;
> > > -       void *xdp_kfunc;
> >
> > >         if (!insn->imm) {
> > >                 verbose(env, "invalid kernel function call not
> > > eliminated in verifier 
> > > pass\n");
> > > @@ -16259,20 +16245,6 @@ static int fixup_kfunc_call(struct 
> > > bpf_verifier_env *env, struct bpf_insn *insn,
> > >         }
> >
> > >         *cnt = 0;
> > > -
> > > -       if (bpf_dev_bound_kfunc_id(insn->imm)) {
> > > -               xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog,
> > > insn->imm);
> > > -               if (xdp_kfunc) {
> > > -                       insn->imm = BPF_CALL_IMM(xdp_kfunc);
> > > -                       return 0;
> > > -               }
> > > -
> > > -               /* fallback to default kfunc when not supported by
> > > netdev */
> > > -       }
> > > -
> > > -       /* insn->imm has the btf func_id. Replace it with
> > > -        * an address (relative to __bpf_call_base).
> > > -        */
> > >         desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
> > >         if (!desc) {
> > >                 verbose(env, "verifier internal error: kernel
> > > function descriptor not 
> > > found for func_id %u\n",
> > > @@ -16280,7 +16252,6 @@ static int fixup_kfunc_call(struct 
> > > bpf_verifier_env *env, struct bpf_insn *insn,
> > >                 return -EFAULT;
> > >         }
> >
> > > -       insn->imm = desc->imm;
> > >         if (insn->off)
> > >                 return 0;
> > >         if (desc->func_id ==
> > > special_kfunc_list[KF_bpf_obj_new_impl]) {
> > > @@ -16834,8 +16805,6 @@ static int do_misc_fixups(struct
> > > bpf_verifier_env 
> > > *env)
> > >                 }
> > >         }
> >
> >
> > [..]
> >
> > > -       sort_kfunc_descs_by_imm(env->prog);
> >
> > If we are not doing sorting here, how does the bsearch work?

> add_kfunc_call() already makes sure that kfuncs are sorted in the
> kfunc_desc_cmp_by_id_off() order. fixup_kfunc_call() used to put
> addresses into imms and re-sort based on that, however, after this
> patch it's not needed anymore: the code (e.g.
> bpf_jit_find_kfunc_model()) continues to do lookups in
> the kfunc_desc_cmp_by_id_off() order.

Makes sense, thank you 👍

> > > -
> > >         return 0;
> > >   }
> >
> > > --
> > > 2.39.1
> >


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-15 17:43       ` Stanislav Fomichev
@ 2023-02-15 17:49         ` Ilya Leoshkevich
  2023-02-15 18:33           ` Stanislav Fomichev
  0 siblings, 1 reply; 12+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 17:49 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa

On Wed, 2023-02-15 at 09:43 -0800, Stanislav Fomichev wrote:
> On 02/15, Ilya Leoshkevich wrote:
> > On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> > > On 02/14, Ilya Leoshkevich wrote:
> > > > test_ksyms_module fails to emit a kfunc call targeting a module
> > > > on
> > > > s390x, because the verifier stores the difference between kfunc
> > > > address and __bpf_call_base in bpf_insn.imm, which is s32, and
> > > > modules
> > > > are roughly (1 << 42) bytes away from the kernel on s390x.
> > > 
> > > > Fix by keeping BTF id in bpf_insn.imm for
> > > > BPF_PSEUDO_KFUNC_CALLs,
> > > > and storing the absolute address in bpf_kfunc_desc, which JITs
> > > > retrieve
> > > > as usual by calling bpf_jit_get_func_addr().
> > > 
> > > > This also fixes the problem with XDP metadata functions
> > > > outlined in
> > > > the description of commit 63d7b53ab59f ("s390/bpf: Implement
> > > > bpf_jit_supports_kfunc_call()") by replacing address lookups
> > > > with
> > > > BTF
> > > > id lookups. This eliminates the inconsistency between
> > > > "abstract"
> > > > XDP
> > > > metadata functions' BTF ids and their concrete addresses.
> > > 
> > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > ---
> > > >   include/linux/bpf.h   |  2 ++
> > > >   kernel/bpf/core.c     | 23 ++++++++++---
> > > >   kernel/bpf/verifier.c | 79 +++++++++++++---------------------
> > > > ----
> > > > -----
> > > >   3 files changed, 45 insertions(+), 59 deletions(-)
> > 

[...]

> > > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32
> > > > func_id,
> > > > u16 
> > > > offset,
> > > > +                      u8 **func_addr)
> > > > +{
> > > > +       const struct bpf_kfunc_desc *desc;
> > > > +
> > > > +       desc = find_kfunc_desc(prog, func_id, offset);
> > > > +       if (WARN_ON_ONCE(!desc))
> > > > +               return -EINVAL;
> > > > +
> > > > +       *func_addr = (u8 *)desc->addr;
> > > > +       return 0;
> > > > +}
> > > 
> > > This function isn't doing much and has a single caller. Should we
> > > just
> > > export find_kfunc_desc?
> 
> > We would have to export struct bpf_kfunc_desc as well; I thought
> > it's
> > better to add an extra function so that we could keep hiding the
> > struct.
> 
> Ah, good point. In this case seems ok to have this extra wrapper.
> On that note: what's the purpose of WARN_ON_ONCE here?

We can hit this only due to an internal verifier/JIT error, so it would
be good to get some indication of this happening. In verifier.c we have
verbose() function for that, but this function is called during JITing.

[...]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-15 17:49         ` Ilya Leoshkevich
@ 2023-02-15 18:33           ` Stanislav Fomichev
  2023-02-15 21:54             ` Ilya Leoshkevich
  0 siblings, 1 reply; 12+ messages in thread
From: Stanislav Fomichev @ 2023-02-15 18:33 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa

On 02/15, Ilya Leoshkevich wrote:
> On Wed, 2023-02-15 at 09:43 -0800, Stanislav Fomichev wrote:
> > On 02/15, Ilya Leoshkevich wrote:
> > > On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> > > > On 02/14, Ilya Leoshkevich wrote:
> > > > > test_ksyms_module fails to emit a kfunc call targeting a module
> > > > > on
> > > > > s390x, because the verifier stores the difference between kfunc
> > > > > address and __bpf_call_base in bpf_insn.imm, which is s32, and
> > > > > modules
> > > > > are roughly (1 << 42) bytes away from the kernel on s390x.
> > > >
> > > > > Fix by keeping BTF id in bpf_insn.imm for
> > > > > BPF_PSEUDO_KFUNC_CALLs,
> > > > > and storing the absolute address in bpf_kfunc_desc, which JITs
> > > > > retrieve
> > > > > as usual by calling bpf_jit_get_func_addr().
> > > >
> > > > > This also fixes the problem with XDP metadata functions
> > > > > outlined in
> > > > > the description of commit 63d7b53ab59f ("s390/bpf: Implement
> > > > > bpf_jit_supports_kfunc_call()") by replacing address lookups
> > > > > with
> > > > > BTF
> > > > > id lookups. This eliminates the inconsistency between
> > > > > "abstract"
> > > > > XDP
> > > > > metadata functions' BTF ids and their concrete addresses.
> > > >
> > > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > > ---
> > > > >   include/linux/bpf.h   |  2 ++
> > > > >   kernel/bpf/core.c     | 23 ++++++++++---
> > > > >   kernel/bpf/verifier.c | 79 +++++++++++++---------------------
> > > > > ----
> > > > > -----
> > > > >   3 files changed, 45 insertions(+), 59 deletions(-)
> > >

> [...]

> > > > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32
> > > > > func_id,
> > > > > u16
> > > > > offset,
> > > > > +                      u8 **func_addr)
> > > > > +{
> > > > > +       const struct bpf_kfunc_desc *desc;
> > > > > +
> > > > > +       desc = find_kfunc_desc(prog, func_id, offset);
> > > > > +       if (WARN_ON_ONCE(!desc))
> > > > > +               return -EINVAL;
> > > > > +
> > > > > +       *func_addr = (u8 *)desc->addr;
> > > > > +       return 0;
> > > > > +}
> > > >
> > > > This function isn't doing much and has a single caller. Should we
> > > > just
> > > > export find_kfunc_desc?
> >
> > > We would have to export struct bpf_kfunc_desc as well; I thought
> > > it's
> > > better to add an extra function so that we could keep hiding the
> > > struct.
> >
> > Ah, good point. In this case seems ok to have this extra wrapper.
> > On that note: what's the purpose of WARN_ON_ONCE here?

> We can hit this only due to an internal verifier/JIT error, so it would
> be good to get some indication of this happening. In verifier.c we have
> verbose() function for that, but this function is called during JITing.

> [...]

 From my point of view, reading the code, it makes it a bit confusing.
If there is a WARN_ON_ONCE, I'm assuming it's guarding against some
kind of internal inconsistency that can happen.

What kind of inconsistency is it guarding against here? We seem to have
find_kfunc_desc in fixup_kfunc_call that checks the same insn->imm
and returns early.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-15 18:33           ` Stanislav Fomichev
@ 2023-02-15 21:54             ` Ilya Leoshkevich
  2023-02-15 21:59               ` Alexei Starovoitov
  0 siblings, 1 reply; 12+ messages in thread
From: Ilya Leoshkevich @ 2023-02-15 21:54 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa

On Wed, 2023-02-15 at 10:33 -0800, Stanislav Fomichev wrote:
> On 02/15, Ilya Leoshkevich wrote:
> > On Wed, 2023-02-15 at 09:43 -0800, Stanislav Fomichev wrote:
> > > On 02/15, Ilya Leoshkevich wrote:
> > > > On Tue, 2023-02-14 at 15:58 -0800, Stanislav Fomichev wrote:
> > > > > On 02/14, Ilya Leoshkevich wrote:
> > > > > > test_ksyms_module fails to emit a kfunc call targeting a
> > > > > > module
> > > > > > on
> > > > > > s390x, because the verifier stores the difference between
> > > > > > kfunc
> > > > > > address and __bpf_call_base in bpf_insn.imm, which is s32,
> > > > > > and
> > > > > > modules
> > > > > > are roughly (1 << 42) bytes away from the kernel on s390x.
> > > > > 
> > > > > > Fix by keeping BTF id in bpf_insn.imm for
> > > > > > BPF_PSEUDO_KFUNC_CALLs,
> > > > > > and storing the absolute address in bpf_kfunc_desc, which
> > > > > > JITs
> > > > > > retrieve
> > > > > > as usual by calling bpf_jit_get_func_addr().
> > > > > 
> > > > > > This also fixes the problem with XDP metadata functions
> > > > > > outlined in
> > > > > > the description of commit 63d7b53ab59f ("s390/bpf:
> > > > > > Implement
> > > > > > bpf_jit_supports_kfunc_call()") by replacing address
> > > > > > lookups
> > > > > > with
> > > > > > BTF
> > > > > > id lookups. This eliminates the inconsistency between
> > > > > > "abstract"
> > > > > > XDP
> > > > > > metadata functions' BTF ids and their concrete addresses.
> > > > > 
> > > > > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > > > ---
> > > > > >   include/linux/bpf.h   |  2 ++
> > > > > >   kernel/bpf/core.c     | 23 ++++++++++---
> > > > > >   kernel/bpf/verifier.c | 79 +++++++++++++-----------------
> > > > > > ----
> > > > > > ----
> > > > > > -----
> > > > > >   3 files changed, 45 insertions(+), 59 deletions(-)
> > > > 
> 
> > [...]
> 
> > > > > > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32
> > > > > > func_id,
> > > > > > u16
> > > > > > offset,
> > > > > > +                      u8 **func_addr)
> > > > > > +{
> > > > > > +       const struct bpf_kfunc_desc *desc;
> > > > > > +
> > > > > > +       desc = find_kfunc_desc(prog, func_id, offset);
> > > > > > +       if (WARN_ON_ONCE(!desc))
> > > > > > +               return -EINVAL;
> > > > > > +
> > > > > > +       *func_addr = (u8 *)desc->addr;
> > > > > > +       return 0;
> > > > > > +}
> > > > > 
> > > > > This function isn't doing much and has a single caller.
> > > > > Should we
> > > > > just
> > > > > export find_kfunc_desc?
> > > 
> > > > We would have to export struct bpf_kfunc_desc as well; I
> > > > thought
> > > > it's
> > > > better to add an extra function so that we could keep hiding
> > > > the
> > > > struct.
> > > 
> > > Ah, good point. In this case seems ok to have this extra wrapper.
> > > On that note: what's the purpose of WARN_ON_ONCE here?
> 
> > We can hit this only due to an internal verifier/JIT error, so it
> > would
> > be good to get some indication of this happening. In verifier.c we
> > have
> > verbose() function for that, but this function is called during
> > JITing.
> 
> > [...]
> 
>  From my point of view, reading the code, it makes it a bit
> confusing. If there is a WARN_ON_ONCE, I'm assuming it's guarding
> against some kind of internal inconsistency that can happen.
> 
> What kind of inconsistency is it guarding against here? We seem to
> have
> find_kfunc_desc in fixup_kfunc_call that checks the same insn->imm
> and returns early.

The potential inconsistency is the situation when the check in
fixup_kfunc_call() passes, and the very same check here fails.

We could say it's impossible and directly dereference desc. But if
there is a bug and it still happens, the result is a panic or an oops.

Or we could still check whether it's NULL. And if we add such a check,
adding a WARN_ON_ONCE() on top is a cheap way to quickly pinpoint the
potential user-observable issues.

Of course, it's a defensive programming / gut feeling thing. I don't
think comparing all pointers to NULL before dereferencing them is a good
idea. It's just that here the first NULL comparison is quite far away,
and with time code that modifies kfunc_tab may be added in between, in
which case this might come in handy.

In any case, I don't have a really strong opinion here, just explaining
my thought process. If the consensus is to drop it, I would not mind
dropping it at all.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-15 21:54             ` Ilya Leoshkevich
@ 2023-02-15 21:59               ` Alexei Starovoitov
  0 siblings, 0 replies; 12+ messages in thread
From: Alexei Starovoitov @ 2023-02-15 21:59 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Stanislav Fomichev, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Jiri Olsa

On Wed, Feb 15, 2023 at 1:55 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> In any case, I don't have a really strong opinion here, just explaining
> my thought process. If the consensus is to drop it, I would not mind
> dropping it at all.

I think doing:
+       desc = find_kfunc_desc(prog, func_id, offset);
+       if (!desc)
+               return -EFAULT;

would be enough to catch that verifier issue.
We have similar code in several places.
When we can print verbose() into the verifier log, we do.
In other cases we just EFAULT which is a sign that something
wrong with the verifier.
We don't use WARN in all those cases.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-14 21:28 ` [PATCH RFC bpf-next 1/1] " Ilya Leoshkevich
  2023-02-14 23:14   ` kernel test robot
  2023-02-14 23:58   ` Stanislav Fomichev
@ 2023-02-16  7:46   ` kernel test robot
  2023-02-16 16:33   ` kernel test robot
  3 siblings, 0 replies; 12+ messages in thread
From: kernel test robot @ 2023-02-16  7:46 UTC (permalink / raw)
  To: Ilya Leoshkevich; +Cc: llvm, oe-kbuild-all

Hi Ilya,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Ilya-Leoshkevich/bpf-Support-64-bit-pointers-to-kfuncs/20230215-054702
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230214212809.242632-2-iii%40linux.ibm.com
patch subject: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
config: i386-randconfig-a002-20230213 (https://download.01.org/0day-ci/archive/20230216/202302161535.Vb9cejVb-lkp@intel.com/config)
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/5b93c73d640845b2e5ddd1b4af608f4896379002
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Ilya-Leoshkevich/bpf-Support-64-bit-pointers-to-kfuncs/20230215-054702
        git checkout 5b93c73d640845b2e5ddd1b4af608f4896379002
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash kernel/bpf/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202302161535.Vb9cejVb-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> kernel/bpf/core.c:1227:13: warning: incompatible pointer to integer conversion assigning to 'u64' (aka 'unsigned long long') from 'u8 *' (aka 'unsigned char *') [-Wint-conversion]
           *func_addr = addr;
                      ^ ~~~~
   1 warning generated.


vim +1227 kernel/bpf/core.c

  1181	
  1182	int bpf_jit_get_func_addr(const struct bpf_prog *prog,
  1183				  const struct bpf_insn *insn, bool extra_pass,
  1184				  u64 *func_addr, bool *func_addr_fixed)
  1185	{
  1186		s16 off = insn->off;
  1187		s32 imm = insn->imm;
  1188		bool fixed;
  1189		u8 *addr;
  1190		int err;
  1191	
  1192		switch (insn->src_reg) {
  1193		case BPF_PSEUDO_CALL:
  1194			/* Place-holder address till the last pass has collected
  1195			 * all addresses for JITed subprograms in which case we
  1196			 * can pick them up from prog->aux.
  1197			 */
  1198			if (!extra_pass)
  1199				addr = NULL;
  1200			else if (prog->aux->func &&
  1201				 off >= 0 && off < prog->aux->func_cnt)
  1202				addr = (u8 *)prog->aux->func[off]->bpf_func;
  1203			else
  1204				return -EINVAL;
  1205			fixed = false;
  1206			break;
  1207		case 0:
  1208			/* Address of a BPF helper call. Since part of the core
  1209			 * kernel, it's always at a fixed location. __bpf_call_base
  1210			 * and the helper with imm relative to it are both in core
  1211			 * kernel.
  1212			 */
  1213			addr = (u8 *)__bpf_call_base + imm;
  1214			fixed = true;
  1215			break;
  1216		case BPF_PSEUDO_KFUNC_CALL:
  1217			err = bpf_get_kfunc_addr(prog, imm, off, &addr);
  1218			if (err)
  1219				return err;
  1220			fixed = true;
  1221			break;
  1222		default:
  1223			return -EINVAL;
  1224		}
  1225	
  1226		*func_addr_fixed = fixed;
> 1227		*func_addr = addr;
  1228		return 0;
  1229	}
  1230	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests


* Re: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
  2023-02-14 21:28 ` [PATCH RFC bpf-next 1/1] " Ilya Leoshkevich
                     ` (2 preceding siblings ...)
  2023-02-16  7:46   ` kernel test robot
@ 2023-02-16 16:33   ` kernel test robot
  3 siblings, 0 replies; 12+ messages in thread
From: kernel test robot @ 2023-02-16 16:33 UTC (permalink / raw)
  To: Ilya Leoshkevich; +Cc: oe-kbuild-all

Hi Ilya,

[FYI, it's a private test report for your RFC patch.]
[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Ilya-Leoshkevich/bpf-Support-64-bit-pointers-to-kfuncs/20230215-054702
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230214212809.242632-2-iii%40linux.ibm.com
patch subject: [PATCH RFC bpf-next 1/1] bpf: Support 64-bit pointers to kfuncs
config: loongarch-buildonly-randconfig-r003-20230214 (https://download.01.org/0day-ci/archive/20230217/202302170058.5ftAyjXt-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/5b93c73d640845b2e5ddd1b4af608f4896379002
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Ilya-Leoshkevich/bpf-Support-64-bit-pointers-to-kfuncs/20230215-054702
        git checkout 5b93c73d640845b2e5ddd1b4af608f4896379002
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash

If you fix the issue, kindly add the following tag where applicable:
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202302170058.5ftAyjXt-lkp@intel.com/

All errors (new ones prefixed by >>):

   kernel/bpf/core.c: In function 'bpf_jit_get_func_addr':
>> kernel/bpf/core.c:1217:23: error: implicit declaration of function 'bpf_get_kfunc_addr'; did you mean 'bpf_jit_get_func_addr'? [-Werror=implicit-function-declaration]
    1217 |                 err = bpf_get_kfunc_addr(prog, imm, off, &addr);
         |                       ^~~~~~~~~~~~~~~~~~
         |                       bpf_jit_get_func_addr
   kernel/bpf/core.c:1227:20: warning: assignment to 'u64' {aka 'long long unsigned int'} from 'u8 *' {aka 'unsigned char *'} makes integer from pointer without a cast [-Wint-conversion]
    1227 |         *func_addr = addr;
         |                    ^
   kernel/bpf/core.c: At top level:
   kernel/bpf/core.c:1646:12: warning: no previous prototype for 'bpf_probe_read_kernel' [-Wmissing-prototypes]
    1646 | u64 __weak bpf_probe_read_kernel(void *dst, u32 size, const void *unsafe_ptr)
         |            ^~~~~~~~~~~~~~~~~~~~~
   kernel/bpf/core.c:2085:6: warning: no previous prototype for 'bpf_patch_call_args' [-Wmissing-prototypes]
    2085 | void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
         |      ^~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +1217 kernel/bpf/core.c

  1181	
  1182	int bpf_jit_get_func_addr(const struct bpf_prog *prog,
  1183				  const struct bpf_insn *insn, bool extra_pass,
  1184				  u64 *func_addr, bool *func_addr_fixed)
  1185	{
  1186		s16 off = insn->off;
  1187		s32 imm = insn->imm;
  1188		bool fixed;
  1189		u8 *addr;
  1190		int err;
  1191	
  1192		switch (insn->src_reg) {
  1193		case BPF_PSEUDO_CALL:
  1194			/* Place-holder address till the last pass has collected
  1195			 * all addresses for JITed subprograms in which case we
  1196			 * can pick them up from prog->aux.
  1197			 */
  1198			if (!extra_pass)
  1199				addr = NULL;
  1200			else if (prog->aux->func &&
  1201				 off >= 0 && off < prog->aux->func_cnt)
  1202				addr = (u8 *)prog->aux->func[off]->bpf_func;
  1203			else
  1204				return -EINVAL;
  1205			fixed = false;
  1206			break;
  1207		case 0:
  1208			/* Address of a BPF helper call. Since part of the core
  1209			 * kernel, it's always at a fixed location. __bpf_call_base
  1210			 * and the helper with imm relative to it are both in core
  1211			 * kernel.
  1212			 */
  1213			addr = (u8 *)__bpf_call_base + imm;
  1214			fixed = true;
  1215			break;
  1216		case BPF_PSEUDO_KFUNC_CALL:
> 1217			err = bpf_get_kfunc_addr(prog, imm, off, &addr);
  1218			if (err)
  1219				return err;
  1220			fixed = true;
  1221			break;
  1222		default:
  1223			return -EINVAL;
  1224		}
  1225	
  1226		*func_addr_fixed = fixed;
  1227		*func_addr = addr;
  1228		return 0;
  1229	}
  1230	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests



Thread overview: 12+ messages
2023-02-14 21:28 [PATCH RFC bpf-next 0/1] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
2023-02-14 21:28 ` [PATCH RFC bpf-next 1/1] " Ilya Leoshkevich
2023-02-14 23:14   ` kernel test robot
2023-02-14 23:58   ` Stanislav Fomichev
2023-02-15 10:07     ` Ilya Leoshkevich
2023-02-15 17:43       ` Stanislav Fomichev
2023-02-15 17:49         ` Ilya Leoshkevich
2023-02-15 18:33           ` Stanislav Fomichev
2023-02-15 21:54             ` Ilya Leoshkevich
2023-02-15 21:59               ` Alexei Starovoitov
2023-02-16  7:46   ` kernel test robot
2023-02-16 16:33   ` kernel test robot
