bpf.vger.kernel.org archive mirror
From: Yauheni Kaliuta <yauheni.kaliuta@redhat.com>
To: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: bpf <bpf@vger.kernel.org>, Andrii Nakryiko <andrii@kernel.org>,
	Jiri Olsa <jolsa@redhat.com>
Subject: Re: [PATCH v2 4/4] selftests/bpf: ringbuf, mmap: bump up page size to 64K
Date: Wed, 31 Mar 2021 09:11:34 +0300	[thread overview]
Message-ID: <CANoWswkx1zNy1fbCkgC6h8f21EPKTg15oezjtLsZ3eN6pEf2Ng@mail.gmail.com> (raw)
In-Reply-To: <CAEf4BzbKfz7if1ktSMiyK4TZYZF8n7mk34UQCi3ZuDZvobkZqQ@mail.gmail.com>

Hi, Andrii,

On Wed, Mar 31, 2021 at 8:49 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Mon, Mar 29, 2021 at 8:20 AM Yauheni Kaliuta
> <yauheni.kaliuta@redhat.com> wrote:
> >
> > On Sun, Mar 28, 2021 at 8:03 AM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> >
> > [...]
> >
> > > >
> > > >  struct {
> > > >         __uint(type, BPF_MAP_TYPE_ARRAY);
> > > > -       __uint(max_entries, 4096);
> > > > +       __uint(max_entries, PAGE_SIZE);
> > >
> > >
> > > so you can set map size at runtime before bpf_object__load (or
> > > skeleton's load) with bpf_map__set_max_entries. That way you don't
> > > have to do any assumptions. Just omit max_entries in BPF source code,
> > > and always set it in userspace.
> >
> > Will it work for ringbuf_multi? If I just set max_entries for ringbuf1
> > and ringbuf2 that way, it gives me
> >
> > libbpf: map 'ringbuf_arr': failed to create inner map: -22
> > libbpf: map 'ringbuf_arr': failed to create: Invalid argument(-22)
> > libbpf: failed to load object 'test_ringbuf_multi'
> > libbpf: failed to load BPF skeleton 'test_ringbuf_multi': -22
> > test_ringbuf_multi:FAIL:skel_load skeleton load failed
> >
>
> You are right, it won't work. We'd need to add something like
> bpf_map__inner_map() accessor to allow to adjust the inner map
> definition:
>
> bpf_map__set_max_entries(bpf_map__inner_map(skel->maps.ringbuf_arr), page_size);

Thanks!

On top of that, for some reason the plain ringbuf_multi test (converted
to use a dynamic size) does not work on my 64K page configuration
either; I haven't investigated why yet. It works on x86 with 4K pages.

>
> And some more fixes. Here's minimal diff that made it work, but
> probably needs a bit more testing:

Thanks again.
I could send the patchset with only the mmap test converted and just
bump the ringbuf size, since the ringbuf change is not a selftests-only
change but also requires the libbpf improvements.

Or would you prefer to change them all together?



> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 7aad78dbb4b4..ed5586cce227 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -2194,6 +2194,7 @@ static int parse_btf_map_def(struct bpf_object *obj,
>              map->inner_map = calloc(1, sizeof(*map->inner_map));
>              if (!map->inner_map)
>                  return -ENOMEM;
> +            map->inner_map->fd = -1;
>              map->inner_map->sec_idx = obj->efile.btf_maps_shndx;
>              map->inner_map->name = malloc(strlen(map->name) +
>                                sizeof(".inner") + 1);
> @@ -3845,6 +3846,14 @@ __u32 bpf_map__max_entries(const struct bpf_map *map)
>      return map->def.max_entries;
>  }
>
> +struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
> +{
> +    if (!bpf_map_type__is_map_in_map(map->def.type))
> +        return NULL;
> +
> +    return map->inner_map;
> +}
> +
>  int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
>  {
>      if (map->fd >= 0)
> @@ -9476,6 +9485,7 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
>          pr_warn("error: inner_map_fd already specified\n");
>          return -EINVAL;
>      }
> +    zfree(&map->inner_map);
>      map->inner_map_fd = fd;
>      return 0;
>  }
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index f500621d28e5..bec4e6a6e31d 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -480,6 +480,7 @@ LIBBPF_API int bpf_map__pin(struct bpf_map *map,
> const char *path);
>  LIBBPF_API int bpf_map__unpin(struct bpf_map *map, const char *path);
>
>  LIBBPF_API int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd);
> +LIBBPF_API struct bpf_map *bpf_map__inner_map(struct bpf_map *map);
>
>  LIBBPF_API long libbpf_get_error(const void *ptr);
>
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index f5990f7208ce..eeb6d5ebd1cc 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -360,4 +360,5 @@ LIBBPF_0.4.0 {
>          bpf_linker__free;
>          bpf_linker__new;
>          bpf_object__set_kversion;
> +        bpf_map__inner_map;
>  } LIBBPF_0.3.0;
> diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
> b/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
> index d37161e59bb2..cdc9c9b1d0e1 100644
> --- a/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
> +++ b/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
> @@ -41,13 +41,23 @@ static int process_sample(void *ctx, void *data, size_t len)
>  void test_ringbuf_multi(void)
>  {
>      struct test_ringbuf_multi *skel;
> -    struct ring_buffer *ringbuf;
> +    struct ring_buffer *ringbuf = NULL;
>      int err;
>
> -    skel = test_ringbuf_multi__open_and_load();
> +    skel = test_ringbuf_multi__open();
>      if (CHECK(!skel, "skel_open_load", "skeleton open&load failed\n"))
>          return;
>
> +    bpf_map__set_max_entries(skel->maps.ringbuf1, 4096);
> +    bpf_map__set_max_entries(skel->maps.ringbuf2, 4096);
> +    bpf_map__set_max_entries(bpf_map__inner_map(skel->maps.ringbuf_arr), 4096);
> +
> +    err = test_ringbuf_multi__load(skel);
> +    if (!ASSERT_OK(err, "skel_load"))
> +        goto cleanup;
> +
>      /* only trigger BPF program for current process */
>      skel->bss->pid = getpid();
>
> diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> index edf3b6953533..055c10b2ff80 100644
> --- a/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> +++ b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> @@ -15,7 +15,6 @@ struct sample {
>
>  struct ringbuf_map {
>      __uint(type, BPF_MAP_TYPE_RINGBUF);
> -    __uint(max_entries, 1 << 12);
>  } ringbuf1 SEC(".maps"),
>    ringbuf2 SEC(".maps");
>


-- 
WBR, Yauheni



Thread overview: 21+ messages
2021-03-26 11:46 [PATCH 0/3] bpf/selftests: page size fixes Yauheni Kaliuta
2021-03-26 11:47 ` [PATCH 1/3] selftests/bpf: test_progs/sockopt_sk: pass page size from userspace Yauheni Kaliuta
2021-03-26 11:47   ` [PATCH 2/3] bpf: selftests: test_progs/sockopt_sk: remove version Yauheni Kaliuta
2021-03-26 11:47   ` [PATCH 3/3] selftests/bpf: ringbuf, mmap: bump up page size to 64K Yauheni Kaliuta
2021-03-26 12:21 ` [PATCH 0/3] bpf/selftests: page size fixes Yauheni Kaliuta
2021-03-26 12:24 ` [PATCH v2 0/4] " Yauheni Kaliuta
2021-03-28  5:05   ` Andrii Nakryiko
2021-03-28 17:06     ` Yauheni Kaliuta
2021-03-28 18:30       ` Andrii Nakryiko
2021-03-26 12:24 ` [PATCH v2 1/4] selftests/bpf: test_progs/sockopt_sk: Convert to use BPF skeleton Yauheni Kaliuta
2021-03-26 12:24   ` [PATCH v2 2/4] selftests/bpf: test_progs/sockopt_sk: pass page size from userspace Yauheni Kaliuta
2021-03-28  5:00     ` Andrii Nakryiko
2021-03-26 12:24   ` [PATCH v2 3/4] bpf: selftests: test_progs/sockopt_sk: remove version Yauheni Kaliuta
2021-03-26 12:24   ` [PATCH v2 4/4] selftests/bpf: ringbuf, mmap: bump up page size to 64K Yauheni Kaliuta
2021-03-28  5:03     ` Andrii Nakryiko
2021-03-29 15:19       ` Yauheni Kaliuta
2021-03-31  5:49         ` Andrii Nakryiko
2021-03-31  6:11           ` Yauheni Kaliuta [this message]
2021-03-31  6:25             ` Andrii Nakryiko
2021-03-31 16:43               ` Yauheni Kaliuta
2021-03-28  4:58   ` [PATCH v2 1/4] selftests/bpf: test_progs/sockopt_sk: Convert to use BPF skeleton Andrii Nakryiko
