From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
To: Daniel Borkmann <daniel@iogearbox.net>
Cc: Andrii Nakryiko <andriin@fb.com>, bpf <bpf@vger.kernel.org>,
	Networking <netdev@vger.kernel.org>,
	Alexei Starovoitov <ast@fb.com>, Kernel Team <kernel-team@fb.com>
Subject: Re: [PATCH bpf-next 0/4] Fix perf_buffer creation on systems with offline CPUs
Date: Fri, 20 Dec 2019 09:46:59 -0800	[thread overview]
Message-ID: <CAEf4Bza2kyMQLiDnkzDi-82xShEiUY2zrre=MJdedZet4g=o7A@mail.gmail.com> (raw)
In-Reply-To: <dfb31c60-3c8c-94a2-5302-569096428e9b@iogearbox.net>

On Tue, Dec 17, 2019 at 5:00 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 12/16/19 6:59 PM, Andrii Nakryiko wrote:
> > On Mon, Dec 16, 2019 at 6:44 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >> On Wed, Dec 11, 2019 at 05:35:20PM -0800, Andrii Nakryiko wrote:
> >>> This patch set fixes perf_buffer__new() behavior on systems that have some of
> >>> the CPUs offline/missing (due to the difference between the "possible" and
> >>> "online" sets). perf_buffer will create a per-CPU buffer and open/attach a
> >>> corresponding perf_event only on CPUs that are present and online at the
> >>> moment of perf_buffer creation. Without this logic, perf_buffer creation has
> >>> no chance of succeeding on such systems, preventing valid and correct BPF
> >>> applications from starting.
> >>
> >> Once a CPU comes back online and processes BPF events, any attempt to push into
> >> the perf RB via bpf_perf_event_output() with the BPF_F_CURRENT_CPU flag would silently
> >
> > bpf_perf_event_output() will return an error code in such a case, so it's
> > not exactly undetectable by the application.
>
> Yeah, true, given there would be no element in the perf map at that slot, the
> program would receive -ENOENT and we could account for missed events via a
> per-CPU map or such.
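
To make that accounting concrete, here is a rough BPF-side sketch; it is not
from the actual patches, the map names, tracepoint, and event layout are made
up for illustration, and it assumes BTF-defined maps and the bpf_helpers.h
shipped with libbpf. The idea is simply to check bpf_perf_event_output()'s
return value and bump a per-CPU counter whenever the output fails.

/* Illustrative only: count events that bpf_perf_event_output() failed to
 * deliver (e.g. -ENOENT when the current CPU has no ring buffer attached). */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct event {
        __u32 pid;
};

struct {
        __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(int));
} events SEC(".maps");

struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
} missed_events SEC(".maps");

SEC("tracepoint/sched/sched_switch")
int count_missed(void *ctx)
{
        struct event e = { .pid = bpf_get_current_pid_tgid() >> 32 };
        __u32 zero = 0;
        __u64 *cnt;
        long err;

        err = bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
                                    &e, sizeof(e));
        if (err) {
                /* no buffer for this CPU (or other failure): remember it */
                cnt = bpf_map_lookup_elem(&missed_events, &zero);
                if (cnt)
                        __sync_fetch_and_add(cnt, 1);
        }
        return 0;
}

char LICENSE[] SEC("license") = "GPL";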
>
> >> get discarded. Should the perf API rather be fixed, instead of plain skipping as
> >> done here, to at least allow creation of the ring buffer for BPF and avoid such a case?
> >
> > Can you elaborate on what perf API fix you have in mind? Do you mean
> > for perf to allow attaching a ring buffer to an offline CPU, or something
> > else?
>
> Yes, I was wondering about the former, meaning the possibility of attaching a
> ring buffer to an offline CPU.

This sounds like a more heavy-weight fix; I'll put it on the back burner
for now and will look at the perf code when I get a chance to see if/how
it's possible.
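
For reference, the per-CPU setup that perf_buffer does today is roughly the
following (a simplified sketch, not libbpf's exact code): open a
PERF_COUNT_SW_BPF_OUTPUT perf event pinned to the CPU and mmap its ring
buffer. It is this perf_event_open() step that fails for offline CPUs, and it
is what a perf-side fix would have to relax.

/* Simplified sketch of the per-CPU setup perf_buffer performs; not libbpf's
 * exact code. The perf_event_open() call fails for CPUs that are possible
 * but currently offline, which is why perf_buffer creation used to fail. */
#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_bpf_output_on_cpu(int cpu, size_t page_cnt, void **ring_out)
{
        struct perf_event_attr attr = {
                .size = sizeof(struct perf_event_attr),
                .type = PERF_TYPE_SOFTWARE,
                .config = PERF_COUNT_SW_BPF_OUTPUT,
                .sample_type = PERF_SAMPLE_RAW,
                .sample_period = 1,
                .wakeup_events = 1,
        };
        size_t mmap_sz = sysconf(_SC_PAGESIZE) * (page_cnt + 1);
        void *ring;
        int fd;

        /* pinned to one CPU, any process; fails if that CPU is offline */
        fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */, cpu,
                     -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
        if (fd < 0)
                return -1;

        /* one extra page for the ring buffer's control header */
        ring = mmap(NULL, mmap_sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (ring == MAP_FAILED) {
                close(fd);
                return -1;
        }
        *ring_out = ring;
        return fd; /* the fd is also stored in the PERF_EVENT_ARRAY map slot */
}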

>
> >>> Andrii Nakryiko (4):
> >>>    libbpf: extract and generalize CPU mask parsing logic
> >>>    selftests/bpf: add CPU mask parsing tests
> >>>    libbpf: don't attach perf_buffer to offline/missing CPUs
> >>>    selftests/bpf: fix perf_buffer test on systems w/ offline CPUs
> >>>
> >>>   tools/lib/bpf/libbpf.c                        | 157 ++++++++++++------
> >>>   tools/lib/bpf/libbpf_internal.h               |   2 +
> >>>   .../selftests/bpf/prog_tests/cpu_mask.c       |  78 +++++++++
> >>>   .../selftests/bpf/prog_tests/perf_buffer.c    |  29 +++-
> >>>   4 files changed, 213 insertions(+), 53 deletions(-)
> >>>   create mode 100644 tools/testing/selftests/bpf/prog_tests/cpu_mask.c
> >>>
> >>> --
> >>> 2.17.1
> >>>
>
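
As for the CPU mask parsing mentioned in the first patch above: the
"possible"/"online" sets come from /sys/devices/system/cpu/possible and
.../online, which use the kernel's cpu-list format ("0-5,8,10-11"). A
stand-alone sketch of such a parser might look like the following (the
function name and error handling are illustrative, not the exact libbpf
helpers).

/* Illustrative stand-alone parser for kernel cpu-list strings such as
 * "0-5,8,10-11" (the format of /sys/devices/system/cpu/{possible,online}).
 * Returns a heap-allocated bool array (true = CPU is in the set), or NULL
 * on parse/allocation failure; *n_out is set to the array length. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static bool *parse_cpu_list(const char *s, int *n_out)
{
        bool *mask = NULL;
        int n = 0;

        while (*s && *s != '\n') {
                int start, end, consumed;

                if (sscanf(s, "%d-%d%n", &start, &end, &consumed) == 2) {
                        /* a range, e.g. "0-5" */
                } else if (sscanf(s, "%d%n", &start, &consumed) == 1) {
                        end = start; /* a single CPU, e.g. "8" */
                } else {
                        goto err;
                }
                if (start < 0 || end < start)
                        goto err;

                if (end >= n) {
                        bool *tmp = realloc(mask, (end + 1) * sizeof(bool));

                        if (!tmp)
                                goto err;
                        mask = tmp;
                        memset(mask + n, 0, (end + 1 - n) * sizeof(bool));
                        n = end + 1;
                }
                for (int cpu = start; cpu <= end; cpu++)
                        mask[cpu] = true;

                s += consumed;
                if (*s == ',')
                        s++;
                else if (*s && *s != '\n')
                        goto err;
        }
        *n_out = n;
        return mask;
err:
        free(mask);
        return NULL;
}

Reading .../online through such a parser before attaching the per-CPU perf
events is essentially what lets perf_buffer skip CPUs that are possible but
not currently online.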


Thread overview: 9+ messages
2019-12-12  1:35 [PATCH bpf-next 0/4] Fix perf_buffer creation on systems with offline CPUs Andrii Nakryiko
2019-12-13 21:04 ` Alexei Starovoitov
2019-12-16 14:44 ` Daniel Borkmann
2019-12-16 17:59   ` Andrii Nakryiko
2019-12-17 13:00     ` Daniel Borkmann
2019-12-20 17:46       ` Andrii Nakryiko [this message]
2020-02-09 17:18 ` Naresh Kamboju
2020-02-09 18:32   ` Andrii Nakryiko
2020-02-09 21:03     ` Greg Kroah-Hartman
