bpf.vger.kernel.org archive mirror
* [PATCH bpf-next v3] bpf: reject kfunc calls that overflow insn->imm
@ 2022-02-09  9:11 Hou Tao
  2022-02-09 15:42 ` Yonghong Song
  0 siblings, 1 reply; 4+ messages in thread
From: Hou Tao @ 2022-02-09  9:11 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Martin KaFai Lau, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, Song Liu, John Fastabend, netdev, bpf, houtao1

Now a kfunc call uses s32 to represent the offset between the address
of the kfunc and __bpf_call_base, but it doesn't check whether the
offset overflows s32, so add an extra check to reject these invalid
kfunc calls.

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
v3:
 * call BPF_CALL_IMM() once (suggested by Yonghong)

v2: https://lore.kernel.org/bpf/20220208123348.40360-1-houtao1@huawei.com
 * instead of checking the overflow in selftests, just reject
   these kfunc calls directly in verifier

v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
---
 kernel/bpf/verifier.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1ae41d0cf96c..eb72e6139e2b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1842,6 +1842,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	struct bpf_kfunc_desc *desc;
 	const char *func_name;
 	struct btf *desc_btf;
+	unsigned long call_imm;
 	unsigned long addr;
 	int err;
 
@@ -1926,9 +1927,17 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		return -EINVAL;
 	}
 
+	call_imm = BPF_CALL_IMM(addr);
+	/* Check whether or not the relative offset overflows desc->imm */
+	if ((unsigned long)(s32)call_imm != call_imm) {
+		verbose(env, "address of kernel function %s is out of range\n",
+			func_name);
+		return -EINVAL;
+	}
+
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
-	desc->imm = BPF_CALL_IMM(addr);
+	desc->imm = call_imm;
 	desc->offset = offset;
 	err = btf_distill_func_proto(&env->log, desc_btf,
 				     func_proto, func_name,
-- 
2.27.0



* Re: [PATCH bpf-next v3] bpf: reject kfunc calls that overflow insn->imm
  2022-02-09  9:11 [PATCH bpf-next v3] bpf: reject kfunc calls that overflow insn->imm Hou Tao
@ 2022-02-09 15:42 ` Yonghong Song
  2022-02-15  4:29   ` Hou Tao
  0 siblings, 1 reply; 4+ messages in thread
From: Yonghong Song @ 2022-02-09 15:42 UTC (permalink / raw)
  To: Hou Tao, Alexei Starovoitov
  Cc: Martin KaFai Lau, Daniel Borkmann, Andrii Nakryiko, Song Liu,
	John Fastabend, netdev, bpf



On 2/9/22 1:11 AM, Hou Tao wrote:
> Now a kfunc call uses s32 to represent the offset between the address
> of the kfunc and __bpf_call_base, but it doesn't check whether the
> offset overflows s32, so add an extra check to reject these invalid
> kfunc calls.
> 
> Signed-off-by: Hou Tao <houtao1@huawei.com>

The patch itself looks good. But the commit message
doesn't specify whether this is a theoretical case or
could really happen in practice. I looked at the patch history
and found the below commit message in v1 of the patch ([1]):

 > Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
 > randomization range to 2 GB"), for arm64 whether KASLR is enabled
 > or not, the module is placed within 2GB of the kernel region, so
 > s32 in bpf_kfunc_desc is sufficient to represent the offset of
 > module function relative to __bpf_call_base. The only thing needed
 > is to override bpf_jit_supports_kfunc_call().

So it does look like the overflow is possible.

So I suggest you add more description of *when* the overflow
may happen to this patch.

And you can also retain your previous selftest patch to test
this verifier change.

   [1] 
https://lore.kernel.org/bpf/20220119144942.305568-1-houtao1@huawei.com/

> ---
> v3:
>   * call BPF_CALL_IMM() once (suggested by Yonghong)
> 
> v2: https://lore.kernel.org/bpf/20220208123348.40360-1-houtao1@huawei.com
>   * instead of checking the overflow in selftests, just reject
>     these kfunc calls directly in verifier
> 
> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
> ---
>   kernel/bpf/verifier.c | 11 ++++++++++-
>   1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 1ae41d0cf96c..eb72e6139e2b 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1842,6 +1842,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>   	struct bpf_kfunc_desc *desc;
>   	const char *func_name;
>   	struct btf *desc_btf;
> +	unsigned long call_imm;
>   	unsigned long addr;
>   	int err;
>   
> @@ -1926,9 +1927,17 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>   		return -EINVAL;
>   	}
>   
> +	call_imm = BPF_CALL_IMM(addr);
> +	/* Check whether or not the relative offset overflows desc->imm */
> +	if ((unsigned long)(s32)call_imm != call_imm) {
> +		verbose(env, "address of kernel function %s is out of range\n",
> +			func_name);
> +		return -EINVAL;
> +	}
> +
>   	desc = &tab->descs[tab->nr_descs++];
>   	desc->func_id = func_id;
> -	desc->imm = BPF_CALL_IMM(addr);
> +	desc->imm = call_imm;
>   	desc->offset = offset;
>   	err = btf_distill_func_proto(&env->log, desc_btf,
>   				     func_proto, func_name,


* Re: [PATCH bpf-next v3] bpf: reject kfunc calls that overflow insn->imm
  2022-02-09 15:42 ` Yonghong Song
@ 2022-02-15  4:29   ` Hou Tao
  2022-02-15  6:39     ` Yonghong Song
  0 siblings, 1 reply; 4+ messages in thread
From: Hou Tao @ 2022-02-15  4:29 UTC (permalink / raw)
  To: Yonghong Song, Alexei Starovoitov
  Cc: Martin KaFai Lau, Daniel Borkmann, Andrii Nakryiko, Song Liu,
	John Fastabend, netdev, bpf

Hi,

On 2/9/2022 11:42 PM, Yonghong Song wrote:
>
>
> On 2/9/22 1:11 AM, Hou Tao wrote:
>> Now a kfunc call uses s32 to represent the offset between the address
>> of the kfunc and __bpf_call_base, but it doesn't check whether the
>> offset overflows s32, so add an extra check to reject these invalid
>> kfunc calls.
>>
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>
> The patch itself looks good. But the commit message
> doesn't specify whether this is a theoretical case or
> could really happen in practice. I looked at the patch history
> and found the below commit message in v1 of the patch ([1]):
>
> > Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
> > randomization range to 2 GB"), for arm64 whether KASLR is enabled
> > or not, the module is placed within 2GB of the kernel region, so
> > s32 in bpf_kfunc_desc is sufficient to represent the offset of
> > module function relative to __bpf_call_base. The only thing needed
> > is to override bpf_jit_supports_kfunc_call().
>
> So it does look like the overflow is possible.
>
> So I suggest you add more description on *when* the overflow
> may happen in this patch.
Will do in v5.
>
> And you can also retain your previous selftest patch to test
> this verifier change.
Is it necessary? IMO it is just a duplication of the newly-added logic.

Regards,
Tao

>
>   [1] https://lore.kernel.org/bpf/20220119144942.305568-1-houtao1@huawei.com/
>
>> ---
>> v3:
>>   * call BPF_CALL_IMM() once (suggested by Yonghong)
>>
>> v2: https://lore.kernel.org/bpf/20220208123348.40360-1-houtao1@huawei.com
>>   * instead of checking the overflow in selftests, just reject
>>     these kfunc calls directly in verifier
>>
>> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
>> ---
>>   kernel/bpf/verifier.c | 11 ++++++++++-
>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 1ae41d0cf96c..eb72e6139e2b 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -1842,6 +1842,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env,
>> u32 func_id, s16 offset)
>>       struct bpf_kfunc_desc *desc;
>>       const char *func_name;
>>       struct btf *desc_btf;
>> +    unsigned long call_imm;
>>       unsigned long addr;
>>       int err;
>>   @@ -1926,9 +1927,17 @@ static int add_kfunc_call(struct bpf_verifier_env
>> *env, u32 func_id, s16 offset)
>>           return -EINVAL;
>>       }
>>   +    call_imm = BPF_CALL_IMM(addr);
>> +    /* Check whether or not the relative offset overflows desc->imm */
>> +    if ((unsigned long)(s32)call_imm != call_imm) {
>> +        verbose(env, "address of kernel function %s is out of range\n",
>> +            func_name);
>> +        return -EINVAL;
>> +    }
>> +
>>       desc = &tab->descs[tab->nr_descs++];
>>       desc->func_id = func_id;
>> -    desc->imm = BPF_CALL_IMM(addr);
>> +    desc->imm = call_imm;
>>       desc->offset = offset;
>>       err = btf_distill_func_proto(&env->log, desc_btf,
>>                        func_proto, func_name,
> .



* Re: [PATCH bpf-next v3] bpf: reject kfunc calls that overflow insn->imm
  2022-02-15  4:29   ` Hou Tao
@ 2022-02-15  6:39     ` Yonghong Song
  0 siblings, 0 replies; 4+ messages in thread
From: Yonghong Song @ 2022-02-15  6:39 UTC (permalink / raw)
  To: Hou Tao, Alexei Starovoitov
  Cc: Martin KaFai Lau, Daniel Borkmann, Andrii Nakryiko, Song Liu,
	John Fastabend, netdev, bpf



On 2/14/22 8:29 PM, Hou Tao wrote:
> Hi,
> 
> On 2/9/2022 11:42 PM, Yonghong Song wrote:
>>
>>
>> On 2/9/22 1:11 AM, Hou Tao wrote:
>>> Now a kfunc call uses s32 to represent the offset between the address
>>> of the kfunc and __bpf_call_base, but it doesn't check whether the
>>> offset overflows s32, so add an extra check to reject these invalid
>>> kfunc calls.
>>>
>>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>>
>> The patch itself looks good. But the commit message
>> doesn't specify whether this is a theoretical case or
>> could really happen in practice. I looked at the patch history
>> and found the below commit message in v1 of the patch ([1]):
>>
>>> Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
>>> randomization range to 2 GB"), for arm64 whether KASLR is enabled
>>> or not, the module is placed within 2GB of the kernel region, so
>>> s32 in bpf_kfunc_desc is sufficient to represent the offset of
>>> module function relative to __bpf_call_base. The only thing needed
>>> is to override bpf_jit_supports_kfunc_call().
>>
>> So it does look like the overflow is possible.
>>
>> So I suggest you add more description on *when* the overflow
>> may happen in this patch.
> Will do in v5.
>>
>> And you can also retain your previous selftest patch to test
>> this verifier change.
> Is it necessary? IMO it is just a duplication of the newly-added logic.

Okay, I just realized that the previous selftest doesn't really
verify the kernel change. That is, it will succeed regardless
of whether the kernel change is applied or not. So it is ok not to
have your previous selftest.

> 
> Regards,
> Tao
> 
>>
>>    [1] https://lore.kernel.org/bpf/20220119144942.305568-1-houtao1@huawei.com/
>>
>>> ---
>>> v3:
>>>    * call BPF_CALL_IMM() once (suggested by Yonghong)
>>>
>>> v2: https://lore.kernel.org/bpf/20220208123348.40360-1-houtao1@huawei.com
>>>    * instead of checking the overflow in selftests, just reject
>>>      these kfunc calls directly in verifier
>>>
>>> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
>>> ---
>>>    kernel/bpf/verifier.c | 11 ++++++++++-
>>>    1 file changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index 1ae41d0cf96c..eb72e6139e2b 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -1842,6 +1842,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env,
>>> u32 func_id, s16 offset)
>>>        struct bpf_kfunc_desc *desc;
>>>        const char *func_name;
>>>        struct btf *desc_btf;
>>> +    unsigned long call_imm;
>>>        unsigned long addr;
>>>        int err;
>>>    @@ -1926,9 +1927,17 @@ static int add_kfunc_call(struct bpf_verifier_env
>>> *env, u32 func_id, s16 offset)
>>>            return -EINVAL;
>>>        }
>>>    +    call_imm = BPF_CALL_IMM(addr);
>>> +    /* Check whether or not the relative offset overflows desc->imm */
>>> +    if ((unsigned long)(s32)call_imm != call_imm) {
>>> +        verbose(env, "address of kernel function %s is out of range\n",
>>> +            func_name);
>>> +        return -EINVAL;
>>> +    }
>>> +
>>>        desc = &tab->descs[tab->nr_descs++];
>>>        desc->func_id = func_id;
>>> -    desc->imm = BPF_CALL_IMM(addr);
>>> +    desc->imm = call_imm;
>>>        desc->offset = offset;
>>>        err = btf_distill_func_proto(&env->log, desc_btf,
>>>                         func_proto, func_name,
>> .
> 

