From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
To: Daniel Borkmann <daniel@iogearbox.net>
Cc: Andrii Nakryiko <andriin@fb.com>, bpf <bpf@vger.kernel.org>,
	Networking <netdev@vger.kernel.org>,
	Alexei Starovoitov <ast@fb.com>, Kernel Team <kernel-team@fb.com>
Subject: Re: [PATCH bpf-next 0/4] Fix perf_buffer creation on systems with offline CPUs
Date: Mon, 16 Dec 2019 09:59:33 -0800	[thread overview]
Message-ID: <CAEf4BzYhmFvhL_DgeXK8xxihcxcguRzox2AXpjBS1BB4n9d7rQ@mail.gmail.com> (raw)
In-Reply-To: <20191216144404.GG14887@linux.fritz.box>

On Mon, Dec 16, 2019 at 6:44 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On Wed, Dec 11, 2019 at 05:35:20PM -0800, Andrii Nakryiko wrote:
> > This patch set fixes perf_buffer__new() behavior on systems where some of
> > the CPUs are offline or missing (due to the difference between the
> > "possible" and "online" CPU sets). perf_buffer will create a per-CPU buffer
> > and open/attach a corresponding perf_event only for CPUs that are present
> > and online at the moment of perf_buffer creation. Without this logic,
> > perf_buffer creation has no chance of succeeding on such systems,
> > preventing valid and correct BPF applications from starting.
>
> Once a CPU goes back online and processes BPF events, any attempt to push
> into the perf RB via bpf_perf_event_output() with the BPF_F_CURRENT_CPU flag
> would silently get discarded.

bpf_perf_event_output() will return an error code in such a case, so it's
not exactly undetectable by the application.
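
To illustrate (a minimal sketch, not part of this series; the map name and
the kprobe attach point below are made up):

#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
} events SEC(".maps");

SEC("kprobe/do_sys_open")
int detect_drop(struct pt_regs *ctx)
{
	int pid = bpf_get_current_pid_tgid() >> 32;
	long err;

	/* If this CPU has no perf_event attached (e.g., it came online
	 * after perf_buffer creation), the helper returns a negative
	 * error instead of dropping the sample undetectably.
	 */
	err = bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
				    &pid, sizeof(pid));
	if (err < 0)
		bpf_printk("perf_event_output failed: %ld\n", err);
	return 0;
}

char _license[] SEC("license") = "GPL";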


> Should the perf API rather be fixed, instead of plainly skipping as done
> here, to at least allow creation of the ring buffer for BPF and avoid such a
> case?

Can you elaborate on what perf API fix you have in mind? Do you mean for perf
to allow attaching a ring buffer to an offline CPU, or something else?
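
For context on the userspace side (a hypothetical sketch against the libbpf
API as of this series; function and map-fd names are made up): before these
patches, the perf_buffer__new() call below would fail outright on a system
with offline CPUs, because a buffer was set up for every "possible" CPU.

#include <stdio.h>
#include <linux/types.h>
#include <bpf/libbpf.h>

static void on_sample(void *ctx, int cpu, void *data, __u32 size)
{
	fprintf(stderr, "%u bytes from CPU %d\n", size, cpu);
}

/* map_fd must refer to a BPF_MAP_TYPE_PERF_EVENT_ARRAY map */
static int consume(int map_fd)
{
	struct perf_buffer_opts opts = { .sample_cb = on_sample };
	struct perf_buffer *pb;
	int err;

	pb = perf_buffer__new(map_fd, 8 /* pages per CPU */, &opts);
	err = libbpf_get_error(pb);
	if (err) {
		fprintf(stderr, "perf_buffer__new failed: %d\n", err);
		return err;
	}
	/* sample callbacks fire from inside poll */
	while ((err = perf_buffer__poll(pb, 100 /* ms */)) >= 0)
		;
	perf_buffer__free(pb);
	return err;
}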

>
> > Andrii Nakryiko (4):
> >   libbpf: extract and generalize CPU mask parsing logic
> >   selftests/bpf: add CPU mask parsing tests
> >   libbpf: don't attach perf_buffer to offline/missing CPUs
> >   selftests/bpf: fix perf_buffer test on systems w/ offline CPUs
> >
> >  tools/lib/bpf/libbpf.c                        | 157 ++++++++++++------
> >  tools/lib/bpf/libbpf_internal.h               |   2 +
> >  .../selftests/bpf/prog_tests/cpu_mask.c       |  78 +++++++++
> >  .../selftests/bpf/prog_tests/perf_buffer.c    |  29 +++-
> >  4 files changed, 213 insertions(+), 53 deletions(-)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/cpu_mask.c
> >
> > --
> > 2.17.1
> >

Thread overview: 9+ messages
2019-12-12  1:35 [PATCH bpf-next 0/4] Fix perf_buffer creation on systems with offline CPUs Andrii Nakryiko
2019-12-13 21:04 ` Alexei Starovoitov
2019-12-16 14:44 ` Daniel Borkmann
2019-12-16 17:59   ` Andrii Nakryiko [this message]
2019-12-17 13:00     ` Daniel Borkmann
2019-12-20 17:46       ` Andrii Nakryiko
2020-02-09 17:18 ` Naresh Kamboju
2020-02-09 18:32   ` Andrii Nakryiko
2020-02-09 21:03     ` Greg Kroah-Hartman
