bpf.vger.kernel.org archive mirror
From: Daniel Borkmann <daniel@iogearbox.net>
To: Andrii Nakryiko <andriin@fb.com>,
	bpf@vger.kernel.org, netdev@vger.kernel.org, ast@fb.com
Cc: andrii.nakryiko@gmail.com, kernel-team@fb.com
Subject: Re: [PATCH bpf] libbpf: count present CPUs, not theoretically possible
Date: Mon, 30 Sep 2019 10:32:06 +0200	[thread overview]
Message-ID: <0b70df6a-28fd-e139-d72c-d4d88e9bc7b7@iogearbox.net> (raw)
In-Reply-To: <20190928063033.1674094-1-andriin@fb.com>

On 9/28/19 8:30 AM, Andrii Nakryiko wrote:
> This patch switches libbpf_num_possible_cpus() from using the possible CPU
> set to the present CPU set. This fixes issues with incorrect auto-sizing of
> PERF_EVENT_ARRAY maps on HOTPLUG-enabled systems.

Those issues should be described in more detail here in the changelog,
otherwise no one knows what exactly is meant when glancing at the git log.

> On HOTPLUG-enabled systems, /sys/devices/system/cpu/possible is going to
> be the set of all representable (i.e., potentially possible) CPUs, which is
> normally way higher than the actual number of CPUs (e.g., 0-127 on a VM I
> tested on, while only two CPU cores were actually present).
> /sys/devices/system/cpu/present, on the other hand, will only contain
> CPUs that are physically present in the system (even if not online yet),
> which is what we really want, especially when creating per-CPU maps or
> perf events.
> 
> On systems with HOTPLUG disabled, present and possible are identical, so
> there is no change of behavior there.
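(As an aside, the difference is easy to see on such a VM with a quick
standalone sketch like the one below -- just an illustration, not part of
the patch -- which dumps both sysfs masks; possible would read 0-127 there
while present reads 0-1.)

  #include <stdio.h>

  /* Print a sysfs CPU mask file, e.g. "0-127" or "0-1". */
  static void print_cpu_mask(const char *path)
  {
  	char buf[128];
  	FILE *f = fopen(path, "r");

  	if (!f)
  		return;
  	if (fgets(buf, sizeof(buf), f))
  		printf("%s: %s", path, buf);
  	fclose(f);
  }

  int main(void)
  {
  	print_cpu_mask("/sys/devices/system/cpu/possible");
  	print_cpu_mask("/sys/devices/system/cpu/present");
  	return 0;
  }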
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>   tools/lib/bpf/libbpf.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index e0276520171b..45351c074e45 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -5899,7 +5899,7 @@ void bpf_program__bpil_offs_to_addr(struct bpf_prog_info_linear *info_linear)
>   
>   int libbpf_num_possible_cpus(void)
>   {
> -	static const char *fcpu = "/sys/devices/system/cpu/possible";
> +	static const char *fcpu = "/sys/devices/system/cpu/present";

The problem is that this is going to break things *badly* for per-cpu maps,
as BPF_DECLARE_PERCPU() relies on possible CPUs, not present ones. And given
present <= possible, you'll end up corrupting user space when you do a lookup
on the map, since the kernel side operates on possible CPUs as well.
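
To make the failure mode concrete, here is a rough sketch of the usual
per-cpu lookup pattern (my own illustration, not code from the tree;
read_percpu_counter() and its parameters are made up for the example).
The kernel copies one value slot per *possible* CPU into the user buffer,
so sizing it by present CPUs means that copy runs past the end of the
allocation:

  #include <errno.h>
  #include <stdlib.h>
  #include <bpf/bpf.h>
  #include <bpf/libbpf.h>

  /* Sum a per-cpu counter map value across all CPUs. */
  static int read_percpu_counter(int map_fd, __u32 key, __u64 *sum)
  {
  	int i, err, n = libbpf_num_possible_cpus();
  	__u64 *values;

  	if (n < 0)
  		return n;
  	/* Must hold one slot per *possible* CPU, or the kernel's copy
  	 * during the lookup overruns the buffer.
  	 */
  	values = calloc(n, sizeof(*values));
  	if (!values)
  		return -ENOMEM;

  	err = bpf_map_lookup_elem(map_fd, &key, values);
  	if (!err) {
  		*sum = 0;
  		for (i = 0; i < n; i++)
  			*sum += values[i];
  	}
  	free(values);
  	return err;
  }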

>   	int len = 0, n = 0, il = 0, ir = 0;
>   	unsigned int start = 0, end = 0;
>   	int tmp_cpus = 0;
> 



Thread overview: 8+ messages
2019-09-28  6:30 [PATCH bpf] libbpf: count present CPUs, not theoretically possible Andrii Nakryiko
2019-09-28 11:20 ` Alan Maguire
2019-09-28 16:31   ` Andrii Nakryiko
2019-09-28 17:46     ` Alan Maguire
2019-09-30  6:06 ` Song Liu
2019-09-30 16:22   ` Andrii Nakryiko
2019-09-30  8:32 ` Daniel Borkmann [this message]
2019-09-30 16:26   ` Andrii Nakryiko
