From: Martin KaFai Lau <kafai@fb.com>
To: Stanislav Fomichev <sdf@google.com>
Cc: <netdev@vger.kernel.org>, <bpf@vger.kernel.org>, <ast@kernel.org>,
	<daniel@iogearbox.net>, <andrii@kernel.org>
Subject: Re: [PATCH bpf-next] bpf: increase supported cgroup storage value size
Date: Fri, 23 Jul 2021 15:39:39 -0700	[thread overview]
Message-ID: <20210723223939.fr45rzktocvg5usw@kafai-mbp.dhcp.thefacebook.com> (raw)
In-Reply-To: <20210723002747.3668098-1-sdf@google.com>

On Thu, Jul 22, 2021 at 05:27:47PM -0700, Stanislav Fomichev wrote:
> Current max cgroup storage value size is 4k (PAGE_SIZE). The other local
> storages accept up to 64k (BPF_LOCAL_STORAGE_MAX_VALUE_SIZE). Let's align
> max cgroup value size with the other storages.
> 
> For percpu, the max is 32k (PCPU_MIN_UNIT_SIZE) because the percpu
> allocator refuses larger allocations.
> 
> The netcnt test is extended to exercise these maximum values (the
> non-percpu test uses a size close to, but not exactly at, the real max).
> 
> Signed-off-by: Stanislav Fomichev <sdf@google.com>
> ---
>  kernel/bpf/local_storage.c                    | 12 +++++-
>  tools/testing/selftests/bpf/netcnt_common.h   | 38 +++++++++++++++----
>  .../testing/selftests/bpf/progs/netcnt_prog.c | 29 +++++++-------
>  tools/testing/selftests/bpf/test_netcnt.c     | 25 +++++++-----
>  4 files changed, 73 insertions(+), 31 deletions(-)
> 
> diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
> index 7ed2a14dc0de..a276da74c20a 100644
> --- a/kernel/bpf/local_storage.c
> +++ b/kernel/bpf/local_storage.c
> @@ -1,6 +1,7 @@
>  //SPDX-License-Identifier: GPL-2.0
>  #include <linux/bpf-cgroup.h>
>  #include <linux/bpf.h>
> +#include <linux/bpf_local_storage.h>
>  #include <linux/btf.h>
>  #include <linux/bug.h>
>  #include <linux/filter.h>
> @@ -284,8 +285,17 @@ static int cgroup_storage_get_next_key(struct bpf_map *_map, void *key,
>  static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
>  {
>  	int numa_node = bpf_map_attr_numa_node(attr);
> +	__u32 max_value_size = PCPU_MIN_UNIT_SIZE;
>  	struct bpf_cgroup_storage_map *map;
>  
> +	/* percpu is bound by PCPU_MIN_UNIT_SIZE, non-percpu
> +	 * is the same as other local storages.
> +	 */
> +	if (attr->map_type == BPF_MAP_TYPE_CGROUP_STORAGE)
> +		max_value_size = BPF_LOCAL_STORAGE_MAX_VALUE_SIZE;
> +
> +	BUILD_BUG_ON(PCPU_MIN_UNIT_SIZE > BPF_LOCAL_STORAGE_MAX_VALUE_SIZE);
If PCPU_MIN_UNIT_SIZE did become larger, I assume it would be bounded by
BPF_LOCAL_STORAGE_MAX_VALUE_SIZE again?

Instead of BUILD_BUG_ON, how about a min_t here:

	if (attr->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE)
		max_value_size = min_t(__u32,
					BPF_LOCAL_STORAGE_MAX_VALUE_SIZE,
					PCPU_MIN_UNIT_SIZE);
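
Both flavors would then stay bounded by BPF_LOCAL_STORAGE_MAX_VALUE_SIZE
even if PCPU_MIN_UNIT_SIZE grew past it, and the BUILD_BUG_ON could go
away. Untested sketch of the whole bound, assuming the value_size check
later in the function is switched over to max_value_size as well:

	__u32 max_value_size = BPF_LOCAL_STORAGE_MAX_VALUE_SIZE;

	/* percpu is additionally bound by PCPU_MIN_UNIT_SIZE */
	if (attr->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE)
		max_value_size = min_t(__u32,
				       BPF_LOCAL_STORAGE_MAX_VALUE_SIZE,
				       PCPU_MIN_UNIT_SIZE);

	if (attr->value_size > max_value_size)
		return ERR_PTR(-E2BIG);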

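Btw, the user-visible effect might be worth spelling out: a map create
that used to fail with -E2BIG should now succeed. Untested userspace
sketch, assuming libbpf's bpf_create_map() helper and root privileges:

	#include <stdio.h>
	#include <bpf/bpf.h>
	#include <linux/bpf.h>

	int main(void)
	{
		/* 64k - 256: comfortably above the old PAGE_SIZE limit and
		 * just under BPF_LOCAL_STORAGE_MAX_VALUE_SIZE (which is a
		 * bit less than 64k). max_entries must be 0 for cgroup
		 * storage maps.
		 */
		int fd = bpf_create_map(BPF_MAP_TYPE_CGROUP_STORAGE,
					sizeof(struct bpf_cgroup_storage_key),
					65536 - 256, 0, 0);

		if (fd < 0)
			perror("bpf_create_map");
		else
			printf("created cgroup storage map, fd=%d\n", fd);
		return 0;
	}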