* [PATCH bpf-next] libbpf: Fix signed overflow in ringbuf_process_ring
From: Brendan Jackman @ 2021-04-27 17:08 UTC
  To: bpf; +Cc: ast, daniel, andrii, linux-kernel, Brendan Jackman

One of our benchmarks running in (Google-internal) CI pushes data
through the ringbuf faster than userspace is able to consume
it. In this case it seems we're actually able to get >INT_MAX entries
in a single ring_buffer__consume call. ASAN detected that cnt
overflows in this case.

Fix by just setting a limit on the number of entries that can be
consumed.

Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 tools/lib/bpf/ringbuf.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
index e7a8d847161f..445a21df0934 100644
--- a/tools/lib/bpf/ringbuf.c
+++ b/tools/lib/bpf/ringbuf.c
@@ -213,8 +213,9 @@ static int ringbuf_process_ring(struct ring* r)
 	do {
 		got_new_data = false;
 		prod_pos = smp_load_acquire(r->producer_pos);
-		while (cons_pos < prod_pos) {
+		/* Don't read more than INT_MAX, or the return value won't make sense. */
+		while (cons_pos < prod_pos && cnt < INT_MAX) {
 			len_ptr = r->data + (cons_pos & r->mask);
 			len = smp_load_acquire(len_ptr);

--
2.31.1.498.g6c1eba8ee3d-goog
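
To make the failure mode concrete: below is a minimal standalone C
sketch (not libbpf code; names are illustrative) of how a per-record
int counter hits undefined behavior at INT_MAX, and how counting in a
wider type and capping the result avoids it.

#include <inttypes.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* One increment per consumed record, as in ringbuf_process_ring(): */
	int cnt = INT_MAX;
	/* cnt++;  <- one more record and this is signed integer overflow,
	 *           i.e. undefined behavior in C, which is what the
	 *           sanitizer flagged in CI. */
	(void)cnt;

	/* Counting in a 64-bit type and capping on the way out is safe: */
	int64_t wide = (int64_t)INT_MAX + 123;
	int capped = wide > INT_MAX ? INT_MAX : (int)wide;

	printf("wide=%" PRId64 " capped=%d\n", wide, capped);
	return 0;
}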



* Re: [PATCH bpf-next] libbpf: Fix signed overflow in ringbuf_process_ring
From: Andrii Nakryiko @ 2021-04-27 21:34 UTC
  To: Brendan Jackman
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, open list

On Tue, Apr 27, 2021 at 10:09 AM Brendan Jackman <jackmanb@google.com> wrote:
>
> One of our benchmarks running in (Google-internal) CI pushes data
> through the ringbuf faster than userspace is able to consume
> it. In this case it seems we're actually able to get >INT_MAX entries
> in a single ring_buffer__consume call. ASAN detected that cnt
> overflows in this case.
>
> Fix by just setting a limit on the number of entries that can be
> consumed.
>
> Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
> Signed-off-by: Brendan Jackman <jackmanb@google.com>
> ---
>  tools/lib/bpf/ringbuf.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> index e7a8d847161f..445a21df0934 100644
> --- a/tools/lib/bpf/ringbuf.c
> +++ b/tools/lib/bpf/ringbuf.c
> @@ -213,8 +213,9 @@ static int ringbuf_process_ring(struct ring* r)
>         do {
>                 got_new_data = false;
>                 prod_pos = smp_load_acquire(r->producer_pos);
> -               while (cons_pos < prod_pos) {
> +               /* Don't read more than INT_MAX, or the return value won't make sense. */
> +               while (cons_pos < prod_pos && cnt < INT_MAX) {

ring_buffer__poll() is assumed to not return until all the enqueued
messages are consumed. That's the requirement for the "adaptive"
notification scheme to work properly. So this will break that and
cause the next ring_buffer__poll() to never wake up.

We could use __u64 internally and then cap it to INT_MAX on return
maybe? But honestly, this sounds like an artificial corner case, if
you are producing data faster than you can consume it and it goes
beyond INT_MAX, something is seriously broken in your application and
you have more important things to handle :)
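
For background, the "adaptive" scheme works roughly as follows: when a
producer commits a record, the kernel only queues an epoll wakeup if
the consumer has caught up to that record. A simplified sketch of the
commit-side logic (paraphrased from kernel/bpf/ringbuf.c; setup,
BUSY/DISCARD bit handling, and error paths are elided):

/* Paraphrased sketch, not the verbatim kernel function. rec_pos is
 * the offset of the just-committed record within the data area. */
static void ringbuf_commit_wakeup_sketch(struct bpf_ringbuf *rb,
					 unsigned long rec_pos, __u64 flags)
{
	unsigned long cons_pos = smp_load_acquire(&rb->consumer_pos) & rb->mask;

	if (flags & BPF_RB_FORCE_WAKEUP)
		irq_work_queue(&rb->work);	/* always notify */
	else if (cons_pos == rec_pos && !(flags & BPF_RB_NO_WAKEUP))
		irq_work_queue(&rb->work);	/* consumer caught up; it may
						 * be about to sleep, notify */
	/* Otherwise no notification is sent: the consumer is assumed to
	 * still be draining. This is why userspace must drain the ring
	 * completely before going back to epoll_wait(). */
}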

>                         len_ptr = r->data + (cons_pos & r->mask);
>                         len = smp_load_acquire(len_ptr);
>
> --
> 2.31.1.498.g6c1eba8ee3d-goog
>


* Re: [PATCH bpf-next] libbpf: Fix signed overflow in ringbuf_process_ring
From: KP Singh @ 2021-04-27 23:05 UTC
  To: Andrii Nakryiko
  Cc: Brendan Jackman, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, open list

On Tue, Apr 27, 2021 at 11:34 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Apr 27, 2021 at 10:09 AM Brendan Jackman <jackmanb@google.com> wrote:
> >
> > One of our benchmarks running in (Google-internal) CI pushes data
> > through the ringbuf faster than userspace is able to consume
> > it. In this case it seems we're actually able to get >INT_MAX entries
> > in a single ring_buffer__consume call. ASAN detected that cnt
> > overflows in this case.
> >
> > Fix by just setting a limit on the number of entries that can be
> > consumed.
> >
> > Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
> > Signed-off-by: Brendan Jackman <jackmanb@google.com>
> > ---
> >  tools/lib/bpf/ringbuf.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> > index e7a8d847161f..445a21df0934 100644
> > --- a/tools/lib/bpf/ringbuf.c
> > +++ b/tools/lib/bpf/ringbuf.c
> > @@ -213,8 +213,9 @@ static int ringbuf_process_ring(struct ring* r)
> >         do {
> >                 got_new_data = false;
> >                 prod_pos = smp_load_acquire(r->producer_pos);
> > -               while (cons_pos < prod_pos) {
> > +               /* Don't read more than INT_MAX, or the return value won't make sense. */
> > +               while (cons_pos < prod_pos && cnt < INT_MAX) {
>
> ring_buffer__poll() is assumed to not return until all the enqueued
> messages are consumed. That's the requirement for the "adaptive"
> notification scheme to work properly. So this will break that and
> cause the next ring_buffer__poll() to never wake up.
>
> We could use __u64 internally and then cap it to INT_MAX on return
> maybe? But honestly, this sounds like an artificial corner case, if
> you are producing data faster than you can consume it and it goes
> beyond INT_MAX, something is seriously broken in your application and

Disclaimer: I don't know what Brendan's benchmark is actually doing

That said, I have seen similar boundaries being reached when doing
process monitoring while a kernel gets compiled (esp. with ccache),
which generates a large number of process events in a very short span
of time. Another example is when someone runs a short process in a
tight while loop.

I agree it's a matter of tuning, but since these corner cases can be
easily triggered even on real (non-CI) systems no matter how much one
tunes, I wouldn't really call it artificial :)

- KP

> you have more important things to handle :)
>
> >                         len_ptr = r->data + (cons_pos & r->mask);
> >                         len = smp_load_acquire(len_ptr);
> >
> > --
> > 2.31.1.498.g6c1eba8ee3d-goog
> >


* Re: [PATCH bpf-next] libbpf: Fix signed overflow in ringbuf_process_ring
From: Andrii Nakryiko @ 2021-04-27 23:19 UTC
  To: KP Singh
  Cc: Brendan Jackman, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, open list

On Tue, Apr 27, 2021 at 4:05 PM KP Singh <kpsingh@kernel.org> wrote:
>
> On Tue, Apr 27, 2021 at 11:34 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Tue, Apr 27, 2021 at 10:09 AM Brendan Jackman <jackmanb@google.com> wrote:
> > >
> > > One of our benchmarks running in (Google-internal) CI pushes data
> > > through the ringbuf faster than userspace is able to consume
> > > it. In this case it seems we're actually able to get >INT_MAX entries
> > > in a single ring_buffer__consume call. ASAN detected that cnt
> > > overflows in this case.
> > >
> > > Fix by just setting a limit on the number of entries that can be
> > > consumed.
> > >
> > > Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
> > > Signed-off-by: Brendan Jackman <jackmanb@google.com>
> > > ---
> > >  tools/lib/bpf/ringbuf.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> > > index e7a8d847161f..445a21df0934 100644
> > > --- a/tools/lib/bpf/ringbuf.c
> > > +++ b/tools/lib/bpf/ringbuf.c
> > > @@ -213,8 +213,9 @@ static int ringbuf_process_ring(struct ring* r)
> > >         do {
> > >                 got_new_data = false;
> > >                 prod_pos = smp_load_acquire(r->producer_pos);
> > > -               while (cons_pos < prod_pos) {
> > > +               /* Don't read more than INT_MAX, or the return value won't make sense. */
> > > +               while (cons_pos < prod_pos && cnt < INT_MAX) {
> >
> > ring_buffer__poll() is assumed to not return until all the enqueued
> > messages are consumed. That's the requirement for the "adaptive"
> > notification scheme to work properly. So this will break that and
> > cause the next ring_buffer__poll() to never wake up.
> >
> > We could use __u64 internally and then cap it to INT_MAX on return
> > maybe? But honestly, this sounds like an artificial corner case, if
> > you are producing data faster than you can consume it and it goes
> > beyond INT_MAX, something is seriously broken in your application and
>
> Disclaimer: I don't know what Brendan's benchmark is actually doing
>
> That said, I have seen similar boundaries being reached when
> doing process monitoring and then a kernel gets compiled (esp. with ccache)
> and generates a large amount of process events in a very short span of time.
> Another example is when someone runs a short process in a tight while loop.
>
> I agree it's a matter of tuning, but since these corner cases can be
> easily triggered
> even on real (non CI) systems no matter how much one tunes, I wouldn't
> really call it artificial :)

Well of course, given a sufficiently active kernel sample producer and
a sufficiently slow consumer, you can keep consuming forever.

I think we have two alternatives here:
1) consume all but cap the return value to INT_MAX
2) consume all but return long long as the result

A third alternative is to have another API with a maximum number of
samples to consume. But then the user needs to know what they are
doing (e.g., they use BPF_RB_FORCE_WAKEUP on the BPF side, do their
own epoll_wait, or call ring_buffer__poll with timeout = 0, etc.).

I'm just not sure anyone would want to understand all the
implications. And it's easy to miss those implications. So maybe let's
do long long (or __s64) return type instead?
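
As a sketch, the surface of that third alternative might look like the
declaration below (hypothetical API: the name, the max_records
parameter, and the contract are illustrative, not an existing libbpf
function):

/* Hypothetical: consume at most max_records records across all rings,
 * returning the number consumed or a negative error. Callers take on
 * wakeup management themselves (BPF_RB_FORCE_WAKEUP on the producer
 * side, their own epoll_wait() loop, or ring_buffer__poll() with
 * timeout = 0). */
LIBBPF_API int ring_buffer__consume_n(struct ring_buffer *rb,
				      size_t max_records);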

>
> - KP
>
> > you have more important things to handle :)
> >
> > >                         len_ptr = r->data + (cons_pos & r->mask);
> > >                         len = smp_load_acquire(len_ptr);
> > >
> > > --
> > > 2.31.1.498.g6c1eba8ee3d-goog
> > >


* Re: [PATCH bpf-next] libbpf: Fix signed overflow in ringbuf_process_ring
From: Brendan Jackman @ 2021-04-28  8:18 UTC
  To: Andrii Nakryiko
  Cc: KP Singh, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, open list

On Wed, 28 Apr 2021 at 01:19, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Apr 27, 2021 at 4:05 PM KP Singh <kpsingh@kernel.org> wrote:
> >
> > On Tue, Apr 27, 2021 at 11:34 PM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Tue, Apr 27, 2021 at 10:09 AM Brendan Jackman <jackmanb@google.com> wrote:
> > > >
> > > > One of our benchmarks running in (Google-internal) CI pushes data
> > > > through the ringbuf faster than userspace is able to consume
> > > > it. In this case it seems we're actually able to get >INT_MAX entries
> > > > in a single ring_buffer__consume call. ASAN detected that cnt
> > > > overflows in this case.
> > > >
> > > > Fix by just setting a limit on the number of entries that can be
> > > > consumed.
> > > >
> > > > Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
> > > > Signed-off-by: Brendan Jackman <jackmanb@google.com>
> > > > ---
> > > >  tools/lib/bpf/ringbuf.c | 3 ++-
> > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> > > > index e7a8d847161f..445a21df0934 100644
> > > > --- a/tools/lib/bpf/ringbuf.c
> > > > +++ b/tools/lib/bpf/ringbuf.c
> > > > @@ -213,8 +213,9 @@ static int ringbuf_process_ring(struct ring* r)
> > > >         do {
> > > >                 got_new_data = false;
> > > >                 prod_pos = smp_load_acquire(r->producer_pos);
> > > > -               while (cons_pos < prod_pos) {
> > > > +               /* Don't read more than INT_MAX, or the return value won't make sense. */
> > > > +               while (cons_pos < prod_pos && cnt < INT_MAX) {
> > >
> > > ring_buffer__poll() is assumed to not return until all the enqueued
> > > messages are consumed. That's the requirement for the "adaptive"
> > > notification scheme to work properly. So this will break that and
> > > cause the next ring_buffer__poll() to never wake up.

Ah yes, good point, thanks.

> > > We could use __u64 internally and then cap it to INT_MAX on return
> > > maybe? But honestly, this sounds like an artificial corner case, if
> > > you are producing data faster than you can consume it and it goes
> > > beyond INT_MAX, something is seriously broken in your application and

Yes, it's certainly artificial, but IMO it's still highly desirable for
libbpf to hold up its side of the bargain even when the application is
behaving very strangely like this.

[...]

> I think we have two alternatives here:
> 1) consume all but cap the return value to INT_MAX
> 2) consume all but return long long as the result
>
> A third alternative is to have another API with a maximum number of
> samples to consume. But then the user needs to know what they are
> doing (e.g., they use BPF_RB_FORCE_WAKEUP on the BPF side, do their
> own epoll_wait, or call ring_buffer__poll with timeout = 0, etc.).
>
> I'm just not sure anyone would want to understand all the
> implications. And it's easy to miss those implications. So maybe let's
> do long long (or __s64) return type instead?

Wouldn't changing the API to a 64-bit return type break existing users
on some ABIs?

I think capping the return value to INT_MAX and adding a note to the
function definition comment would also be fine; it doesn't feel like a
very complex thing for the user to understand: "Returns the number of
records consumed (or INT_MAX, whichever is less)".
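
Concretely, the public declaration's comment could spell out that
contract along these lines (wording illustrative, not the final libbpf
comment):

/* Consume available data from all registered ring buffers without
 * blocking. Returns the number of records consumed (capped at INT_MAX
 * even if more records were processed), or a negative error code. */
LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);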


* Re: [PATCH bpf-next] libbpf: Fix signed overflow in ringbuf_process_ring
From: Andrii Nakryiko @ 2021-04-28 18:13 UTC
  To: Brendan Jackman
  Cc: KP Singh, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, open list

On Wed, Apr 28, 2021 at 1:18 AM Brendan Jackman <jackmanb@google.com> wrote:
>
> On Wed, 28 Apr 2021 at 01:19, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> >
> > On Tue, Apr 27, 2021 at 4:05 PM KP Singh <kpsingh@kernel.org> wrote:
> > >
> > > On Tue, Apr 27, 2021 at 11:34 PM Andrii Nakryiko
> > > <andrii.nakryiko@gmail.com> wrote:
> > > >
> > > > On Tue, Apr 27, 2021 at 10:09 AM Brendan Jackman <jackmanb@google.com> wrote:
> > > > >
> > > > > One of our benchmarks running in (Google-internal) CI pushes data
> > > > > through the ringbuf faster than userspace is able to consume
> > > > > it. In this case it seems we're actually able to get >INT_MAX entries
> > > > > in a single ring_buffer__consume call. ASAN detected that cnt
> > > > > overflows in this case.
> > > > >
> > > > > Fix by just setting a limit on the number of entries that can be
> > > > > consumed.
> > > > >
> > > > > Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
> > > > > Signed-off-by: Brendan Jackman <jackmanb@google.com>
> > > > > ---
> > > > >  tools/lib/bpf/ringbuf.c | 3 ++-
> > > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> > > > > index e7a8d847161f..445a21df0934 100644
> > > > > --- a/tools/lib/bpf/ringbuf.c
> > > > > +++ b/tools/lib/bpf/ringbuf.c
> > > > > @@ -213,8 +213,9 @@ static int ringbuf_process_ring(struct ring* r)
> > > > >         do {
> > > > >                 got_new_data = false;
> > > > >                 prod_pos = smp_load_acquire(r->producer_pos);
> > > > > -               while (cons_pos < prod_pos) {
> > > > > +               /* Don't read more than INT_MAX, or the return value won't make sense. */
> > > > > +               while (cons_pos < prod_pos && cnt < INT_MAX) {
> > > >
> > > > ring_buffer__poll() is assumed to not return until all the enqueued
> > > > messages are consumed. That's the requirement for the "adaptive"
> > > > notification scheme to work properly. So this will break that and
> > > > cause the next ring_buffer__poll() to never wake up.
>
> Ah yes, good point, thanks.
>
> > > > We could use __u64 internally and then cap it to INT_MAX on return
> > > > maybe? But honestly, this sounds like an artificial corner case, if
> > > > you are producing data faster than you can consume it and it goes
> > > > beyond INT_MAX, something is seriously broken in your application and
>
> Yes it's certainly artificial but IMO it's still highly desirable for
> libbpf to hold up its side of the bargain even when the application is
> behaving very strangely like this.

One can also argue that if an application consumed more than 2 billion
messages in one go, that's an error. ;-P But of course that is not
great.

>
> [...]
>
> > I think we have two alternatives here:
> > 1) consume all but cap the return value to INT_MAX
> > 2) consume all but return long long as the result
> >
> > A third alternative is to have another API with a maximum number of
> > samples to consume. But then the user needs to know what they are
> > doing (e.g., they use BPF_RB_FORCE_WAKEUP on the BPF side, do their
> > own epoll_wait, or call ring_buffer__poll with timeout = 0, etc.).
> >
> > I'm just not sure anyone would want to understand all the
> > implications. And it's easy to miss those implications. So maybe let's
> > do long long (or __s64) return type instead?
>
> Wouldn't changing the API to a 64-bit return type break existing users
> on some ABIs?
>

Yes, it might; not perfect.

> I think capping the return value to INT_MAX and adding a note to the
> function definition comment would also be fine; it doesn't feel like a
> very complex thing for the user to understand: "Returns the number of
> records consumed (or INT_MAX, whichever is less)".

Yep, let's cap. But to avoid penalizing the hot loop with extra
checks, let's use int64_t internally for counting and only cap it
before the return.
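
A sketch of that shape, assuming libbpf's existing internals (a struct
ring_buffer holding ring_cnt rings); the actual follow-up patch may
differ in detail:

static int64_t ringbuf_process_ring(struct ring *r)
{
	int64_t cnt = 0;	/* 64-bit: no overflow check in the hot loop */

	/* ... existing do/while consume loop, unchanged except that it
	 * now increments the 64-bit cnt once per consumed record ... */

	return cnt;
}

int ring_buffer__consume(struct ring_buffer *rb)
{
	int64_t err, res = 0;
	int i;

	for (i = 0; i < rb->ring_cnt; i++) {
		err = ringbuf_process_ring(&rb->rings[i]);
		if (err < 0)
			return err;
		res += err;
	}
	/* cap only once, on the way out */
	return res > INT_MAX ? INT_MAX : (int)res;
}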


* Re: [PATCH bpf-next] libbpf: Fix signed overflow in ringbuf_process_ring
From: Brendan Jackman @ 2021-04-29  8:46 UTC
  To: Andrii Nakryiko
  Cc: KP Singh, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, open list

On Wed, 28 Apr 2021 at 20:13, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
...
>
> Yep, let's cap. But to avoid penalizing the hot loop with extra
> checks, let's use int64_t internally for counting and only cap it
> before the return.

Cool, sounds good. Patch incoming, but I'm on interrupts duty this week,
so it might not be today/tomorrow.
