* [PATCH bpf-next] bpf, arm64: enable kfunc call
@ 2022-01-19 14:49 ` Hou Tao
0 siblings, 0 replies; 6+ messages in thread
From: Hou Tao @ 2022-01-19 14:49 UTC (permalink / raw)
To: Alexei Starovoitov, Ard Biesheuvel
Cc: Martin KaFai Lau, Yonghong Song, Daniel Borkmann,
Andrii Nakryiko, Zi Shen Lim, Will Deacon, Catalin Marinas,
netdev, bpf, linux-arm-kernel, houtao1
Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
randomization range to 2 GB"), modules on arm64 are placed within
2 GB of the kernel region whether or not KASLR is enabled, so the
s32 field in bpf_kfunc_desc is sufficient to represent the offset
of a module function relative to __bpf_call_base. The only thing
needed is to override bpf_jit_supports_kfunc_call().
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
arch/arm64/net/bpf_jit_comp.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index e96d4d87291f..74f9a9b6a053 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
return prog;
}
+bool bpf_jit_supports_kfunc_call(void)
+{
+ return true;
+}
+
u64 bpf_jit_alloc_exec_limit(void)
{
return VMALLOC_END - VMALLOC_START;
--
2.27.0
* Re: [PATCH bpf-next] bpf, arm64: enable kfunc call
2022-01-19 14:49 ` Hou Tao
@ 2022-01-24 16:21 ` Daniel Borkmann
-1 siblings, 0 replies; 6+ messages in thread
From: Daniel Borkmann @ 2022-01-24 16:21 UTC (permalink / raw)
To: Hou Tao, Alexei Starovoitov, Ard Biesheuvel
Cc: Martin KaFai Lau, Yonghong Song, Andrii Nakryiko, Zi Shen Lim,
Will Deacon, Catalin Marinas, netdev, bpf, linux-arm-kernel
On 1/19/22 3:49 PM, Hou Tao wrote:
> Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
> randomization range to 2 GB"), modules on arm64 are placed within
> 2 GB of the kernel region whether or not KASLR is enabled, so the
> s32 field in bpf_kfunc_desc is sufficient to represent the offset
> of a module function relative to __bpf_call_base. The only thing
> needed is to override bpf_jit_supports_kfunc_call().
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>
LGTM, could we also add a BPF selftest to assert that this assumption
won't break in the future when bpf_jit_supports_kfunc_call() returns true?
E.g. extending lib/test_bpf.ko could be an option, wdyt?
> ---
> arch/arm64/net/bpf_jit_comp.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index e96d4d87291f..74f9a9b6a053 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> return prog;
> }
>
> +bool bpf_jit_supports_kfunc_call(void)
> +{
> + return true;
> +}
> +
> u64 bpf_jit_alloc_exec_limit(void)
> {
> return VMALLOC_END - VMALLOC_START;
>
* Re: [PATCH bpf-next] bpf, arm64: enable kfunc call
2022-01-24 16:21 ` Daniel Borkmann
@ 2022-01-26 11:10 ` Hou Tao
-1 siblings, 0 replies; 6+ messages in thread
From: Hou Tao @ 2022-01-26 11:10 UTC (permalink / raw)
To: Daniel Borkmann, Alexei Starovoitov, Ard Biesheuvel
Cc: Martin KaFai Lau, Yonghong Song, Andrii Nakryiko, Zi Shen Lim,
Will Deacon, Catalin Marinas, netdev, bpf, linux-arm-kernel
Hi,
On 1/25/2022 12:21 AM, Daniel Borkmann wrote:
> On 1/19/22 3:49 PM, Hou Tao wrote:
>> Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
>> randomization range to 2 GB"), modules on arm64 are placed within
>> 2 GB of the kernel region whether or not KASLR is enabled, so the
>> s32 field in bpf_kfunc_desc is sufficient to represent the offset
>> of a module function relative to __bpf_call_base. The only thing
>> needed is to override bpf_jit_supports_kfunc_call().
>>
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>
> LGTM, could we also add a BPF selftest to assert that this assumption
> won't break in the future when bpf_jit_supports_kfunc_call() returns true?
>
> E.g. extending lib/test_bpf.ko could be an option, wdyt?
Makes sense. I will figure out how to do that.
Regards,
Tao
>
>> ---
>> arch/arm64/net/bpf_jit_comp.c | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>> index e96d4d87291f..74f9a9b6a053 100644
>> --- a/arch/arm64/net/bpf_jit_comp.c
>> +++ b/arch/arm64/net/bpf_jit_comp.c
>> @@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog
>> *prog)
>> return prog;
>> }
>> +bool bpf_jit_supports_kfunc_call(void)
>> +{
>> + return true;
>> +}
>> +
>> u64 bpf_jit_alloc_exec_limit(void)
>> {
>> return VMALLOC_END - VMALLOC_START;
>>
>
end of thread, other threads:[~2022-01-26 11:21 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-19 14:49 [PATCH bpf-next] bpf, arm64: enable kfunc call Hou Tao
2022-01-24 16:21 ` Daniel Borkmann
2022-01-26 11:10 ` Hou Tao