From: Gilad Reti <gilad.reti@gmail.com>
To: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: bpf <bpf@vger.kernel.org>, Andrii Nakryiko <andrii@kernel.org>,
	assaf.piltzer@cyberark.com
Subject: Re: libbpf ringbuf manager starvation
Date: Thu, 21 Jan 2021 23:42:25 +0200
Message-ID: <CANaYP3E5L_Tw3Ra3KDBZr27wr9JAb=KbyGAuwBHDPoKMBHRbQg@mail.gmail.com>
In-Reply-To: <CAEf4Bzbd-_6m=u9m32c0-hZA=JMkNEC2yWgcs_02Nv4fxxmpfQ@mail.gmail.com>

On Thu, Jan 21, 2021 at 9:29 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Jan 19, 2021 at 7:51 AM Gilad Reti <gilad.reti@gmail.com> wrote:
> >
> > Hello there,
> >
>
> Hi,
>
> > While playing with the (relatively) new ringbuf API we encountered
> > something that we believe is an interesting use case.
> > When registering multiple ringbufs with the same ringbuf manager, one of
> > which is highly active, the other ringbufs may starve. Since libbpf
> > (e)polls on all the managed ringbufs at once and then tries to read
> > *as many samples as it can* from the ready ringbufs, it may get stuck
> > indefinitely on one of them and never get around to processing the others.
> > We know that the current ringbuf API exposes the epoll_fd so that one
> > can implement the epoll logic on their own, but this seems to us like a
> > not-so-advanced use case that may be worth handling specifically.
> > Would allowing the caller to specify a maximum number of samples to
> > consume be a reasonable addition to the ringbuf API?
>
> Did you actually run into such a situation in practice? If you have a
> BPF program producing so much data so fast that user-space can't keep
> up, then it sounds like a suboptimal use case for BPF ringbuf.

Yes, we have run into such a situation. Our userspace consumer is far from
performance-optimal, but it is currently the best we have.
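
For reference, our consumer looks roughly like the sketch below (the handler
and map fd names are made up for illustration): a single ring_buffer manager
over both ringbufs, drained from one loop.

#include <bpf/libbpf.h>

static int handle_busy(void *ctx, void *data, size_t size)
{
	/* process a sample from the high-rate ringbuf */
	return 0;
}

static int handle_quiet(void *ctx, void *data, size_t size)
{
	/* process a sample from the low-rate ringbuf */
	return 0;
}

int consume_events(int busy_map_fd, int quiet_map_fd, volatile int *stop)
{
	/* one manager, so both ringbufs share a single epoll set */
	struct ring_buffer *rb;

	rb = ring_buffer__new(busy_map_fd, handle_busy, NULL, NULL);
	if (!rb)
		return -1;
	if (ring_buffer__add(rb, quiet_map_fd, handle_quiet, NULL) < 0) {
		ring_buffer__free(rb);
		return -1;
	}

	while (!*stop) {
		/* drains every ready ringbuf completely before returning;
		 * if the busy one never runs dry, handle_quiet() may not
		 * be called for a long time */
		ring_buffer__poll(rb, 100 /* ms */);
	}

	ring_buffer__free(rb);
	return 0;
}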


>
> But nevertheless, my advice for your situation is to use two instances
> of libbpf's ring_buffer: one for super-busy ringbuf, and another for
> everything else. Or you can even have one for each. It's very
> flexible.
>

Yes, that's what we are doing currently as a workaround. Thanks.
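
Concretely, our split looks something like the sketch below (it reuses the
handlers from the earlier snippet, and the map fd names are again
illustrative): the busy ringbuf gets its own ring_buffer instance, polled
from a dedicated thread, so it can no longer hold up the rest.

#include <pthread.h>
#include <bpf/libbpf.h>

static volatile int stop;

static void *poll_loop(void *arg)
{
	struct ring_buffer *rb = arg;

	while (!stop)
		ring_buffer__poll(rb, 100 /* ms */);
	return NULL;
}

int consume_split(int busy_map_fd, int quiet_map_fd)
{
	struct ring_buffer *busy_rb, *rest_rb;
	pthread_t t1, t2;

	/* separate managers: each has its own epoll set and consumer loop */
	busy_rb = ring_buffer__new(busy_map_fd, handle_busy, NULL, NULL);
	rest_rb = ring_buffer__new(quiet_map_fd, handle_quiet, NULL, NULL);
	if (!busy_rb || !rest_rb) {
		ring_buffer__free(busy_rb);
		ring_buffer__free(rest_rb);
		return -1;
	}

	pthread_create(&t1, NULL, poll_loop, busy_rb);
	pthread_create(&t2, NULL, poll_loop, rest_rb);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	ring_buffer__free(busy_rb);
	ring_buffer__free(rest_rb);
	return 0;
}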


> As for having this limit, it's not so simple, unfortunately. The
> contract between kernel, epoll, and libbpf is that user-space will
> always consume all the items until it runs out of more items to
> consume. Internally, the kernel-side BPF ringbuf relies on that to skip
> unnecessary epoll notifications. If you don't consume all of the items and
> then attempt to (e)poll again, you'll never get another notification
> (unless you force-notify from your BPF program, that's an advanced use
> case).
>
> We could do a round-robin across all registered ringbufs within the
> ring_buffer instance in ring_buffer__poll()/ring_buffer__consume(),
> but I think it's over-designing for a quite unusual case.
>

Yes, I agree it is not worth redesigning the entire ringbuf processing
implementation for this use case, but we thought adding another parameter
would be simpler - thanks for clarifying the difficulties.
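
(For completeness, if I understand correctly, the force-notify you mention is
the BPF_RB_FORCE_WAKEUP submit flag, i.e. something like the BPF-side sketch
below; the program, map and event names are made up. We have not gone down
that route.)

/* force an epoll notification on every submit, even when the kernel
 * would otherwise skip the wakeup because the consumer is lagging */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct event {
	__u32 pid;
};

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 256 * 1024);
} events SEC(".maps");

SEC("tp/sched/sched_process_exec")
int handle_exec(void *ctx)
{
	struct event *e;

	e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
	if (!e)
		return 0;

	e->pid = bpf_get_current_pid_tgid() >> 32;
	/* BPF_RB_FORCE_WAKEUP: always notify the consumer */
	bpf_ringbuf_submit(e, BPF_RB_FORCE_WAKEUP);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";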

>
> >
> > Thanks
