bpf.vger.kernel.org archive mirror
* [PATCH bpf v1] bpf: Fix undefined behavior in narrow load handling
@ 2019-05-08 16:08 Krzesimir Nowak
  2019-05-09 21:30 ` Daniel Borkmann
  0 siblings, 1 reply; 4+ messages in thread
From: Krzesimir Nowak @ 2019-05-08 16:08 UTC (permalink / raw)
  To: bpf
  Cc: Krzesimir Nowak, Alban Crequy, Iago López Galeiras,
	Yonghong Song, Alexei Starovoitov, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, netdev, linux-kernel

Commit 31fd85816dbe ("bpf: permits narrower load from bpf program
context fields") made the verifier add AND instructions to clear the
unwanted bits with a mask when doing a narrow load. The mask is
computed with

(1 << size * 8) - 1

where "size" is the size of the narrow load. When doing a 4 byte load
of an 8 byte field, the verifier shifts the literal 1 by 32 places to
the left. This results in signed integer overflow, which is undefined
behavior. Typically the computed mask was zero, so the result of the
narrow load ended up being zero too.

Cast the literal to unsigned long long to avoid the overflow. Note that
a narrow load of a 4 byte field does not trigger the undefined behavior,
because the load size can only be 1 or 2 bytes, so shifting 1 by 8 or
16 places cannot overflow. And reading 4 bytes would not be a narrow
load of a 4 byte field.

Reviewed-by: Alban Crequy <alban@kinvolk.io>
Reviewed-by: Iago López Galeiras <iago@kinvolk.io>
Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
Cc: Yonghong Song <yhs@fb.com>
Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 09d5d972c9ff..950fac024fbb 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7296,7 +7296,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 									insn->dst_reg,
 									shift);
 				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
-								(1 << size * 8) - 1);
+								(1ULL << size * 8) - 1);
 			}
 		}
 
-- 
2.20.1



* Re: [PATCH bpf v1] bpf: Fix undefined behavior in narrow load handling
  2019-05-08 16:08 [PATCH bpf v1] bpf: Fix undefined behavior in narrow load handling Krzesimir Nowak
@ 2019-05-09 21:30 ` Daniel Borkmann
  2019-05-10 10:16   ` Krzesimir Nowak
  0 siblings, 1 reply; 4+ messages in thread
From: Daniel Borkmann @ 2019-05-09 21:30 UTC (permalink / raw)
  To: Krzesimir Nowak, bpf
  Cc: Alban Crequy, Iago López Galeiras, Yonghong Song,
	Alexei Starovoitov, Martin KaFai Lau, Song Liu, netdev,
	linux-kernel

On 05/08/2019 06:08 PM, Krzesimir Nowak wrote:
> Commit 31fd85816dbe ("bpf: permits narrower load from bpf program
> context fields") made the verifier add AND instructions to clear the
> unwanted bits with a mask when doing a narrow load. The mask is
> computed with
> 
> (1 << size * 8) - 1
> 
> where "size" is the size of the narrow load. When doing a 4 byte load
> of an 8 byte field, the verifier shifts the literal 1 by 32 places to
> the left. This results in signed integer overflow, which is undefined
> behavior. Typically the computed mask was zero, so the result of the
> narrow load ended up being zero too.
>
> Cast the literal to unsigned long long to avoid the overflow. Note that
> a narrow load of a 4 byte field does not trigger the undefined behavior,
> because the load size can only be 1 or 2 bytes, so shifting 1 by 8 or
> 16 places cannot overflow. And reading 4 bytes would not be a narrow
> load of a 4 byte field.
> 
> Reviewed-by: Alban Crequy <alban@kinvolk.io>
> Reviewed-by: Iago López Galeiras <iago@kinvolk.io>
> Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
> Cc: Yonghong Song <yhs@fb.com>
> Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
> ---
>  kernel/bpf/verifier.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 09d5d972c9ff..950fac024fbb 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -7296,7 +7296,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
>  									insn->dst_reg,
>  									shift);
>  				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
> -								(1 << size * 8) - 1);
> +								(1ULL << size * 8) - 1);
>  			}

Makes sense, good catch & thanks for the fix!

Could you also add a test case to test_verifier.c so we keep track of this?

Thanks,
Daniel


* Re: [PATCH bpf v1] bpf: Fix undefined behavior in narrow load handling
  2019-05-09 21:30 ` Daniel Borkmann
@ 2019-05-10 10:16   ` Krzesimir Nowak
  2019-05-13  0:01     ` Daniel Borkmann
  0 siblings, 1 reply; 4+ messages in thread
From: Krzesimir Nowak @ 2019-05-10 10:16 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: bpf, Alban Crequy, Iago López Galeiras, Yonghong Song,
	Alexei Starovoitov, Martin KaFai Lau, Song Liu, netdev,
	linux-kernel

On Thu, May 9, 2019 at 11:30 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 05/08/2019 06:08 PM, Krzesimir Nowak wrote:
> > Commit 31fd85816dbe ("bpf: permits narrower load from bpf program
> > context fields") made the verifier add AND instructions to clear the
> > unwanted bits with a mask when doing a narrow load. The mask is
> > computed with
> >
> > (1 << size * 8) - 1
> >
> > where "size" is the size of the narrow load. When doing a 4 byte load
> > of an 8 byte field, the verifier shifts the literal 1 by 32 places to
> > the left. This results in signed integer overflow, which is undefined
> > behavior. Typically the computed mask was zero, so the result of the
> > narrow load ended up being zero too.
> >
> > Cast the literal to unsigned long long to avoid the overflow. Note that
> > a narrow load of a 4 byte field does not trigger the undefined behavior,
> > because the load size can only be 1 or 2 bytes, so shifting 1 by 8 or
> > 16 places cannot overflow. And reading 4 bytes would not be a narrow
> > load of a 4 byte field.
> >
> > Reviewed-by: Alban Crequy <alban@kinvolk.io>
> > Reviewed-by: Iago López Galeiras <iago@kinvolk.io>
> > Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
> > Cc: Yonghong Song <yhs@fb.com>
> > Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
> > ---
> >  kernel/bpf/verifier.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 09d5d972c9ff..950fac024fbb 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -7296,7 +7296,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
> >                                                                       insn->dst_reg,
> >                                                                       shift);
> >                               insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
> > -                                                             (1 << size * 8) - 1);
> > +                                                             (1ULL << size * 8) - 1);
> >                       }
>
> Makes sense, good catch & thanks for the fix!
>
> Could you also add a test case to test_verifier.c so we keep track of this?
>
> Thanks,
> Daniel

Hi,

A test for it is a bit tricky. I only found two 64bit fields that can
be loaded narrowly - `sample_period` and `addr` in `struct
bpf_perf_event_data` - so in theory I could have a test like the following:

{
    "32bit loads of a 64bit field (both least and most significant words)",
    .insns = {
    BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
                offsetof(struct bpf_perf_event_data, addr)),
    BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
                offsetof(struct bpf_perf_event_data, addr) + 4),
    BPF_MOV64_IMM(BPF_REG_0, 0),
    BPF_EXIT_INSN(),
    },
    .result = ACCEPT,
    .prog_type = BPF_PROG_TYPE_PERF_EVENT,
},

A test like this would check that the program is not rejected, but
rejection was never the issue. It does not check whether the verifier
transformed the narrow reads properly. Ideally the BPF program would
do something like this:

/* let's assume that low and high variables get their values from narrow load */
__u64 low = (__u32)perf_event->addr;
__u64 high = (__u32)(perf_event->addr >> 32);
__u64 addr = low | (high << 32);

return addr != perf_event->addr;

But the test_verifier.c won't be able to run this, because
BPF_PROG_TYPE_PERF_EVENT programs are not supported by the
bpf_test_run_prog function.

Any hints how to proceed here?

Cheers,
Krzesimir
-- 
Kinvolk GmbH | Adalbertstr.6a, 10999 Berlin | tel: +491755589364
Geschäftsführer/Directors: Alban Crequy, Chris Kühl, Iago López Galeiras
Registergericht/Court of registration: Amtsgericht Charlottenburg
Registernummer/Registration number: HRB 171414 B
Ust-ID-Nummer/VAT ID number: DE302207000


* Re: [PATCH bpf v1] bpf: Fix undefined behavior in narrow load handling
  2019-05-10 10:16   ` Krzesimir Nowak
@ 2019-05-13  0:01     ` Daniel Borkmann
  0 siblings, 0 replies; 4+ messages in thread
From: Daniel Borkmann @ 2019-05-13  0:01 UTC (permalink / raw)
  To: Krzesimir Nowak
  Cc: bpf, Alban Crequy, Iago López Galeiras, Yonghong Song,
	Alexei Starovoitov, Martin KaFai Lau, Song Liu, netdev,
	linux-kernel

On 05/10/2019 12:16 PM, Krzesimir Nowak wrote:
> On Thu, May 9, 2019 at 11:30 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>> On 05/08/2019 06:08 PM, Krzesimir Nowak wrote:
>>> Commit 31fd85816dbe ("bpf: permits narrower load from bpf program
>>> context fields") made the verifier add AND instructions to clear the
>>> unwanted bits with a mask when doing a narrow load. The mask is
>>> computed with
>>>
>>> (1 << size * 8) - 1
>>>
>>> where "size" is the size of the narrow load. When doing a 4 byte load
>>> of an 8 byte field, the verifier shifts the literal 1 by 32 places to
>>> the left. This results in signed integer overflow, which is undefined
>>> behavior. Typically the computed mask was zero, so the result of the
>>> narrow load ended up being zero too.
>>>
>>> Cast the literal to unsigned long long to avoid the overflow. Note that
>>> a narrow load of a 4 byte field does not trigger the undefined behavior,
>>> because the load size can only be 1 or 2 bytes, so shifting 1 by 8 or
>>> 16 places cannot overflow. And reading 4 bytes would not be a narrow
>>> load of a 4 byte field.
>>>
>>> Reviewed-by: Alban Crequy <alban@kinvolk.io>
>>> Reviewed-by: Iago López Galeiras <iago@kinvolk.io>
>>> Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
>>> Cc: Yonghong Song <yhs@fb.com>
>>> Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
>>> ---
>>>  kernel/bpf/verifier.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index 09d5d972c9ff..950fac024fbb 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -7296,7 +7296,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
>>>                                                                       insn->dst_reg,
>>>                                                                       shift);
>>>                               insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
>>> -                                                             (1 << size * 8) - 1);
>>> +                                                             (1ULL << size * 8) - 1);
>>>                       }
>>
>> Makes sense, good catch & thanks for the fix!
>>
>> Could you also add a test case to test_verifier.c so we keep track of this?
>>
>> Thanks,
>> Daniel
> 
> Hi,
> 
> A test for it is a bit tricky. I only found two 64bit fields that can
> be loaded narrowly - `sample_period` and `addr` in `struct
> bpf_perf_event_data` - so in theory I could have a test like the following:
> 
> {
>     "32bit loads of a 64bit field (both least and most significant words)",
>     .insns = {
>     BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
>                 offsetof(struct bpf_perf_event_data, addr)),
>     BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
>                 offsetof(struct bpf_perf_event_data, addr) + 4),
>     BPF_MOV64_IMM(BPF_REG_0, 0),
>     BPF_EXIT_INSN(),
>     },
>     .result = ACCEPT,
>     .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> },
> 
> A test like this would check that the program is not rejected, but
> rejection was never the issue. It does not check whether the verifier
> transformed the narrow reads properly. Ideally the BPF program would
> do something like this:
> 
> /* let's assume that low and high variables get their values from narrow load */
> __u64 low = (__u32)perf_event->addr;
> __u64 high = (__u32)(perf_event->addr >> 32);
> __u64 addr = low | (high << 32);
> 
> return addr != perf_event->addr;
> 
> But the test_verifier.c won't be able to run this, because
> BPF_PROG_TYPE_PERF_EVENT programs are not supported by the
> bpf_test_run_prog function.
> 
> Any hints how to proceed here?

test_verifier actually also runs the programs after successful verification,
so the above C-like snippet should be converted to BPF asm. Search for ".retval"
in some of the test cases. (I've applied the fix itself to bpf for now, but I
still expect such a test case as a follow-up for the same tree. Thanks!)
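Following that hint, the C-like check from earlier in the thread could be expressed with the kernel's BPF instruction macros roughly as below. This is only a sketch, not the submitted test: the register choices, the test name, and the exact retval convention are my assumptions.

```c
{
	"32bit narrow loads of a 64bit field reassemble the full value",
	.insns = {
	/* r2 = full 64-bit load of addr */
	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1,
		    offsetof(struct bpf_perf_event_data, addr)),
	/* r4 = low 32 bits, r5 = high 32 bits (narrow loads) */
	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
		    offsetof(struct bpf_perf_event_data, addr)),
	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1,
		    offsetof(struct bpf_perf_event_data, addr) + 4),
	/* reassemble: r4 |= r5 << 32 */
	BPF_ALU64_IMM(BPF_LSH, BPF_REG_5, 32),
	BPF_ALU64_REG(BPF_OR, BPF_REG_4, BPF_REG_5),
	/* r0 = 1 if the reassembled value matches the full load, else 0 */
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_JMP_REG(BPF_JNE, BPF_REG_2, BPF_REG_4, 1),
	BPF_MOV64_IMM(BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 1, /* test_verifier runs the program and compares r0 */
	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
},
```

With the unfixed mask, the high-word narrow load would be zeroed, the reassembled value would differ from the full load, and the program would return 0 instead of 1.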

> Cheers,
> Krzesimir
> 


