BPF Archive on lore.kernel.org
* [PATCH bpf-next] libbpf: don't allocate 16M for log buffer by default
@ 2020-03-24 23:31 Stanislav Fomichev
  2020-03-24 23:45 ` Andrii Nakryiko
  0 siblings, 1 reply; 5+ messages in thread
From: Stanislav Fomichev @ 2020-03-24 23:31 UTC (permalink / raw)
  To: netdev, bpf; +Cc: davem, ast, daniel, Stanislav Fomichev

For each prog/btf load we allocate and free 16 megs of verifier buffer.
On production systems this doesn't really make sense because the
programs/btf have gone through extensive testing and are (mostly)
guaranteed to load successfully.

Let's switch to a much smaller buffer by default (128 bytes; sys_bpf
doesn't accept a smaller log buffer) and resize it if the kernel returns
ENOSPC. On the first ENOSPC error we resize the buffer to BPF_LOG_BUF_SIZE
and then, on each subsequent ENOSPC, we keep doubling the buffer.
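
The resize policy described above can be condensed into a tiny helper. The sketch below is purely illustrative: the SKETCH_* macro and next_log_buf_size() are not libbpf symbols, and the patch itself open-codes this with the max() macro.

```c
#include <assert.h>

#define SKETCH_LOG_BUF_SIZE	(16u * 1024 * 1024)	/* stand-in for BPF_LOG_BUF_SIZE */

/* On ENOSPC: jump straight to the old 16M default first, then keep doubling. */
static unsigned int next_log_buf_size(unsigned int cur)
{
	unsigned int doubled = cur << 1;

	return doubled > SKETCH_LOG_BUF_SIZE ? doubled : SKETCH_LOG_BUF_SIZE;
}
```

Starting from the 128-byte default, repeated ENOSPC errors thus yield 16M, then 32M, 64M, and so on.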

Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
 tools/lib/bpf/btf.c             | 10 +++++++++-
 tools/lib/bpf/libbpf.c          | 10 ++++++++--
 tools/lib/bpf/libbpf_internal.h |  2 ++
 3 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 3d1c25fc97ae..53c7efc3b347 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -657,13 +657,14 @@ int btf__finalize_data(struct bpf_object *obj, struct btf *btf)
 
 int btf__load(struct btf *btf)
 {
-	__u32 log_buf_size = BPF_LOG_BUF_SIZE;
+	__u32 log_buf_size = BPF_MIN_LOG_BUF_SIZE;
 	char *log_buf = NULL;
 	int err = 0;
 
 	if (btf->fd >= 0)
 		return -EEXIST;
 
+retry_load:
 	log_buf = malloc(log_buf_size);
 	if (!log_buf)
 		return -ENOMEM;
@@ -673,6 +674,13 @@ int btf__load(struct btf *btf)
 	btf->fd = bpf_load_btf(btf->data, btf->data_size,
 			       log_buf, log_buf_size, false);
 	if (btf->fd < 0) {
+		if (errno == ENOSPC) {
+			log_buf_size = max((__u32)BPF_LOG_BUF_SIZE,
+					   log_buf_size << 1);
+			free(log_buf);
+			goto retry_load;
+		}
+
 		err = -errno;
 		pr_warn("Error loading BTF: %s(%d)\n", strerror(errno), errno);
 		if (*log_buf)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 085e41f9b68e..793c81b35ccc 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -4855,7 +4855,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 {
 	struct bpf_load_program_attr load_attr;
 	char *cp, errmsg[STRERR_BUFSIZE];
-	int log_buf_size = BPF_LOG_BUF_SIZE;
+	size_t log_buf_size = BPF_MIN_LOG_BUF_SIZE;
 	char *log_buf;
 	int btf_fd, ret;
 
@@ -4911,7 +4911,8 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 	}
 
 	if (errno == ENOSPC) {
-		log_buf_size <<= 1;
+		log_buf_size = max((size_t)BPF_LOG_BUF_SIZE,
+				   log_buf_size << 1);
 		free(log_buf);
 		goto retry_load;
 	}
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index 8c3afbd97747..2720f3366798 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -23,6 +23,8 @@
 #define BTF_PARAM_ENC(name, type) (name), (type)
 #define BTF_VAR_SECINFO_ENC(type, offset, size) (type), (offset), (size)
 
+#define BPF_MIN_LOG_BUF_SIZE 128
+
 #ifndef min
 # define min(x, y) ((x) < (y) ? (x) : (y))
 #endif
-- 
2.25.1.696.g5e7596f4ac-goog



* Re: [PATCH bpf-next] libbpf: don't allocate 16M for log buffer by default
  2020-03-24 23:31 [PATCH bpf-next] libbpf: don't allocate 16M for log buffer by default Stanislav Fomichev
@ 2020-03-24 23:45 ` Andrii Nakryiko
  2020-03-24 23:59   ` Stanislav Fomichev
  0 siblings, 1 reply; 5+ messages in thread
From: Andrii Nakryiko @ 2020-03-24 23:45 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Networking, bpf, David S. Miller, Alexei Starovoitov, Daniel Borkmann

On Tue, Mar 24, 2020 at 4:31 PM Stanislav Fomichev <sdf@google.com> wrote:
>
> For each prog/btf load we allocate and free 16 megs of verifier buffer.
> On production systems it doesn't really make sense because the
> programs/btf have gone through extensive testing and (mostly) guaranteed
> to successfully load.
>
> Let's switch to a much smaller buffer by default (128 bytes, sys_bpf
> doesn't accept smaller log buffer) and resize it if the kernel returns
> ENOSPC. On the first ENOSPC error we resize the buffer to BPF_LOG_BUF_SIZE
> and then, on each subsequent ENOSPC, we keep doubling the buffer.
>
> Signed-off-by: Stanislav Fomichev <sdf@google.com>
> ---
>  tools/lib/bpf/btf.c             | 10 +++++++++-
>  tools/lib/bpf/libbpf.c          | 10 ++++++++--
>  tools/lib/bpf/libbpf_internal.h |  2 ++
>  3 files changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> index 3d1c25fc97ae..53c7efc3b347 100644
> --- a/tools/lib/bpf/btf.c
> +++ b/tools/lib/bpf/btf.c
> @@ -657,13 +657,14 @@ int btf__finalize_data(struct bpf_object *obj, struct btf *btf)
>
>  int btf__load(struct btf *btf)
>  {
> -       __u32 log_buf_size = BPF_LOG_BUF_SIZE;
> +       __u32 log_buf_size = BPF_MIN_LOG_BUF_SIZE;
>         char *log_buf = NULL;
>         int err = 0;
>
>         if (btf->fd >= 0)
>                 return -EEXIST;
>
> +retry_load:
>         log_buf = malloc(log_buf_size);
>         if (!log_buf)
>                 return -ENOMEM;

I'd argue that on the first try we shouldn't allocate log_buf at all, then
start allocating it using a reasonable starting size (see below).
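
A self-contained sketch of that suggestion — skip the allocation entirely on the first attempt, and only allocate (and then double) a buffer once the load fails — with a stub standing in for the real loader. Every name here is illustrative, not libbpf API, and the 32KB starting size is an assumption:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Stub for bpf_load_btf()/sys_bpf(): fails with ENOSPC unless the log
 * buffer can hold `needed` bytes of verifier output. */
static int fake_load(char *log_buf, size_t log_buf_size, size_t needed)
{
	if (log_buf_size < needed) {
		errno = ENOSPC;
		return -1;
	}
	if (log_buf)
		memset(log_buf, 0, log_buf_size);
	return 3;	/* pretend fd */
}

/* First attempt with no buffer at all; allocate and double only on ENOSPC. */
static int load_with_lazy_log_buf(size_t needed, size_t *attempts)
{
	size_t log_buf_size = 0;
	char *log_buf = NULL;
	int fd;

retry_load:
	if (log_buf_size) {
		log_buf = malloc(log_buf_size);
		if (!log_buf)
			return -ENOMEM;
	}
	(*attempts)++;
	fd = fake_load(log_buf, log_buf_size, needed);
	if (fd < 0 && errno == ENOSPC) {
		log_buf_size = log_buf_size ? log_buf_size << 1 : 32 * 1024;
		free(log_buf);
		log_buf = NULL;
		goto retry_load;
	}
	free(log_buf);
	return fd;
}
```

With this shape the common (successful) case does a single load attempt and no allocation at all.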

> @@ -673,6 +674,13 @@ int btf__load(struct btf *btf)
>         btf->fd = bpf_load_btf(btf->data, btf->data_size,
>                                log_buf, log_buf_size, false);
>         if (btf->fd < 0) {
> +               if (errno == ENOSPC) {
> +                       log_buf_size = max((__u32)BPF_LOG_BUF_SIZE,
> +                                          log_buf_size << 1);
> +                       free(log_buf);
> +                       goto retry_load;
> +               }
> +
>                 err = -errno;
>                 pr_warn("Error loading BTF: %s(%d)\n", strerror(errno), errno);
>                 if (*log_buf)
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 085e41f9b68e..793c81b35ccc 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -4855,7 +4855,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
>  {
>         struct bpf_load_program_attr load_attr;
>         char *cp, errmsg[STRERR_BUFSIZE];
> -       int log_buf_size = BPF_LOG_BUF_SIZE;
> +       size_t log_buf_size = BPF_MIN_LOG_BUF_SIZE;
>         char *log_buf;
>         int btf_fd, ret;
>
> @@ -4911,7 +4911,13 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
>         }
>
>         if (errno == ENOSPC) {

same, doing if (!log_buf || errno == ENOSPC) should handle this
without any other major changes?

> -               log_buf_size <<= 1;
> +               log_buf_size = max((size_t)BPF_LOG_BUF_SIZE,
> +                                  log_buf_size << 1);
>                 free(log_buf);
>                 goto retry_load;
>         }
> diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> index 8c3afbd97747..2720f3366798 100644
> --- a/tools/lib/bpf/libbpf_internal.h
> +++ b/tools/lib/bpf/libbpf_internal.h
> @@ -23,6 +23,8 @@
>  #define BTF_PARAM_ENC(name, type) (name), (type)
>  #define BTF_VAR_SECINFO_ENC(type, offset, size) (type), (offset), (size)
>
> +#define BPF_MIN_LOG_BUF_SIZE 128

This seems way too low; if there is some error, the buffer will almost
certainly be too short, probably for a few iterations, just causing waste.
Let's make it something a bit more reasonable, like 32KB or something?

> +
>  #ifndef min
>  # define min(x, y) ((x) < (y) ? (x) : (y))
>  #endif
> --
> 2.25.1.696.g5e7596f4ac-goog
>


* Re: [PATCH bpf-next] libbpf: don't allocate 16M for log buffer by default
  2020-03-24 23:45 ` Andrii Nakryiko
@ 2020-03-24 23:59   ` Stanislav Fomichev
  2020-03-25  0:08     ` Andrii Nakryiko
  0 siblings, 1 reply; 5+ messages in thread
From: Stanislav Fomichev @ 2020-03-24 23:59 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Stanislav Fomichev, Networking, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann

On 03/24, Andrii Nakryiko wrote:
> On Tue, Mar 24, 2020 at 4:31 PM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > For each prog/btf load we allocate and free 16 megs of verifier buffer.
> > On production systems it doesn't really make sense because the
> > programs/btf have gone through extensive testing and (mostly) guaranteed
> > to successfully load.
> >
> > Let's switch to a much smaller buffer by default (128 bytes, sys_bpf
> > doesn't accept smaller log buffer) and resize it if the kernel returns
> > ENOSPC. On the first ENOSPC error we resize the buffer to BPF_LOG_BUF_SIZE
> > and then, on each subsequent ENOSPC, we keep doubling the buffer.
> >
> > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > ---
> >  tools/lib/bpf/btf.c             | 10 +++++++++-
> >  tools/lib/bpf/libbpf.c          | 10 ++++++++--
> >  tools/lib/bpf/libbpf_internal.h |  2 ++
> >  3 files changed, 19 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > index 3d1c25fc97ae..53c7efc3b347 100644
> > --- a/tools/lib/bpf/btf.c
> > +++ b/tools/lib/bpf/btf.c
> > @@ -657,13 +657,14 @@ int btf__finalize_data(struct bpf_object *obj, struct btf *btf)
> >
> >  int btf__load(struct btf *btf)
> >  {
> > -       __u32 log_buf_size = BPF_LOG_BUF_SIZE;
> > +       __u32 log_buf_size = BPF_MIN_LOG_BUF_SIZE;
> >         char *log_buf = NULL;
> >         int err = 0;
> >
> >         if (btf->fd >= 0)
> >                 return -EEXIST;
> >
> > +retry_load:
> >         log_buf = malloc(log_buf_size);
> >         if (!log_buf)
> >                 return -ENOMEM;
> 
> I'd argue that on first try we shouldn't allocate log_buf at all, then
> start allocating it using reasonable starting size (see below).
Agreed, makes sense.

> > @@ -673,6 +674,13 @@ int btf__load(struct btf *btf)
> >         btf->fd = bpf_load_btf(btf->data, btf->data_size,
> >                                log_buf, log_buf_size, false);
> >         if (btf->fd < 0) {
> > +               if (errno == ENOSPC) {
> > +                       log_buf_size = max((__u32)BPF_LOG_BUF_SIZE,
> > +                                          log_buf_size << 1);
> > +                       free(log_buf);
> > +                       goto retry_load;
> > +               }
> > +
> >                 err = -errno;
> >                 pr_warn("Error loading BTF: %s(%d)\n", strerror(errno), errno);
> >                 if (*log_buf)
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 085e41f9b68e..793c81b35ccc 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -4855,7 +4855,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
> >  {
> >         struct bpf_load_program_attr load_attr;
> >         char *cp, errmsg[STRERR_BUFSIZE];
> > -       int log_buf_size = BPF_LOG_BUF_SIZE;
> > +       size_t log_buf_size = BPF_MIN_LOG_BUF_SIZE;
> >         char *log_buf;
> >         int btf_fd, ret;
> >
> > @@ -4911,7 +4911,13 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
> >         }
> >
> >         if (errno == ENOSPC) {
> 
> same, doing if (!log_buf || errno == ENOSPC) should handle this
> without any other major changes?
Yeah, I don't see why it shouldn't. Let me try to see if I hit something
in the selftests with that approach.

> > -               log_buf_size <<= 1;
> > +               log_buf_size = max((size_t)BPF_LOG_BUF_SIZE,
> > +                                  log_buf_size << 1);
> >                 free(log_buf);
> >                 goto retry_load;
> >         }
> > diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> > index 8c3afbd97747..2720f3366798 100644
> > --- a/tools/lib/bpf/libbpf_internal.h
> > +++ b/tools/lib/bpf/libbpf_internal.h
> > @@ -23,6 +23,8 @@
> >  #define BTF_PARAM_ENC(name, type) (name), (type)
> >  #define BTF_VAR_SECINFO_ENC(type, offset, size) (type), (offset), (size)
> >
> > +#define BPF_MIN_LOG_BUF_SIZE 128
> 
> This seems way too low, if there is some error it almost certainly
> will be too short, probably for few iterations, just causing waste.
> Let's make it something a bit more reasonable, like 32KB or something?
In this case, maybe start with the existing 16M BPF_LOG_BUF_SIZE?
My goal here is to optimize for the successful case. If there is an error,
the size shouldn't matter that much.


* Re: [PATCH bpf-next] libbpf: don't allocate 16M for log buffer by default
  2020-03-24 23:59   ` Stanislav Fomichev
@ 2020-03-25  0:08     ` Andrii Nakryiko
  2020-03-25  9:54       ` Daniel Borkmann
  0 siblings, 1 reply; 5+ messages in thread
From: Andrii Nakryiko @ 2020-03-25  0:08 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Stanislav Fomichev, Networking, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann

On Tue, Mar 24, 2020 at 4:59 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 03/24, Andrii Nakryiko wrote:
> > On Tue, Mar 24, 2020 at 4:31 PM Stanislav Fomichev <sdf@google.com> wrote:
> > >
> > > For each prog/btf load we allocate and free 16 megs of verifier buffer.
> > > On production systems it doesn't really make sense because the
> > > programs/btf have gone through extensive testing and (mostly) guaranteed
> > > to successfully load.
> > >
> > > Let's switch to a much smaller buffer by default (128 bytes, sys_bpf
> > > doesn't accept smaller log buffer) and resize it if the kernel returns
> > > ENOSPC. On the first ENOSPC error we resize the buffer to BPF_LOG_BUF_SIZE
> > > and then, on each subsequent ENOSPC, we keep doubling the buffer.
> > >
> > > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > > ---
> > >  tools/lib/bpf/btf.c             | 10 +++++++++-
> > >  tools/lib/bpf/libbpf.c          | 10 ++++++++--
> > >  tools/lib/bpf/libbpf_internal.h |  2 ++
> > >  3 files changed, 19 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > > index 3d1c25fc97ae..53c7efc3b347 100644
> > > --- a/tools/lib/bpf/btf.c
> > > +++ b/tools/lib/bpf/btf.c
> > > @@ -657,13 +657,14 @@ int btf__finalize_data(struct bpf_object *obj, struct btf *btf)
> > >
> > >  int btf__load(struct btf *btf)
> > >  {
> > > -       __u32 log_buf_size = BPF_LOG_BUF_SIZE;
> > > +       __u32 log_buf_size = BPF_MIN_LOG_BUF_SIZE;
> > >         char *log_buf = NULL;
> > >         int err = 0;
> > >
> > >         if (btf->fd >= 0)
> > >                 return -EEXIST;
> > >
> > > +retry_load:
> > >         log_buf = malloc(log_buf_size);
> > >         if (!log_buf)
> > >                 return -ENOMEM;
> >
> > I'd argue that on first try we shouldn't allocate log_buf at all, then
> > start allocating it using reasonable starting size (see below).
> Agreed, makes sense.
>
> > > @@ -673,6 +674,13 @@ int btf__load(struct btf *btf)
> > >         btf->fd = bpf_load_btf(btf->data, btf->data_size,
> > >                                log_buf, log_buf_size, false);
> > >         if (btf->fd < 0) {
> > > +               if (errno == ENOSPC) {
> > > +                       log_buf_size = max((__u32)BPF_LOG_BUF_SIZE,
> > > +                                          log_buf_size << 1);
> > > +                       free(log_buf);
> > > +                       goto retry_load;
> > > +               }
> > > +
> > >                 err = -errno;
> > >                 pr_warn("Error loading BTF: %s(%d)\n", strerror(errno), errno);
> > >                 if (*log_buf)
> > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > index 085e41f9b68e..793c81b35ccc 100644
> > > --- a/tools/lib/bpf/libbpf.c
> > > +++ b/tools/lib/bpf/libbpf.c
> > > @@ -4855,7 +4855,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
> > >  {
> > >         struct bpf_load_program_attr load_attr;
> > >         char *cp, errmsg[STRERR_BUFSIZE];
> > > -       int log_buf_size = BPF_LOG_BUF_SIZE;
> > > +       size_t log_buf_size = BPF_MIN_LOG_BUF_SIZE;
> > >         char *log_buf;
> > >         int btf_fd, ret;
> > >
> > > @@ -4911,7 +4911,13 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
> > >         }
> > >
> > >         if (errno == ENOSPC) {
> >
> > same, doing if (!log_buf || errno == ENOSPC) should handle this
> > without any other major changes?
> Yeah, I don't see why it shouldn't. Let me try to see if I hit something
> in the selftests with that approach.
>
> > > -               log_buf_size <<= 1;
> > > +               log_buf_size = max((size_t)BPF_LOG_BUF_SIZE,
> > > +                                  log_buf_size << 1);
> > >                 free(log_buf);
> > >                 goto retry_load;
> > >         }
> > > diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> > > index 8c3afbd97747..2720f3366798 100644
> > > --- a/tools/lib/bpf/libbpf_internal.h
> > > +++ b/tools/lib/bpf/libbpf_internal.h
> > > @@ -23,6 +23,8 @@
> > >  #define BTF_PARAM_ENC(name, type) (name), (type)
> > >  #define BTF_VAR_SECINFO_ENC(type, offset, size) (type), (offset), (size)
> > >
> > > +#define BPF_MIN_LOG_BUF_SIZE 128
> >
> > This seems way too low, if there is some error it almost certainly
> > will be too short, probably for few iterations, just causing waste.
> > Let's make it something a bit more reasonable, like 32KB or something?
> In this case, maybe start with the existing 16M BPF_LOG_BUF_SIZE?
> My goal here is optimize for the successful case. If there is an error the
> size shouldn't matter that much.

Not feeling strongly, but we will already have a retry loop, so it's not
too hard to do it in steps. Errors do happen in production as well, and
it would be good not to eat too much memory unnecessarily.


* Re: [PATCH bpf-next] libbpf: don't allocate 16M for log buffer by default
  2020-03-25  0:08     ` Andrii Nakryiko
@ 2020-03-25  9:54       ` Daniel Borkmann
  0 siblings, 0 replies; 5+ messages in thread
From: Daniel Borkmann @ 2020-03-25  9:54 UTC (permalink / raw)
  To: Andrii Nakryiko, Stanislav Fomichev
  Cc: Stanislav Fomichev, Networking, bpf, David S. Miller, Alexei Starovoitov

On 3/25/20 1:08 AM, Andrii Nakryiko wrote:
> On Tue, Mar 24, 2020 at 4:59 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>> On 03/24, Andrii Nakryiko wrote:
>>> On Tue, Mar 24, 2020 at 4:31 PM Stanislav Fomichev <sdf@google.com> wrote:
>>>>
>>>> For each prog/btf load we allocate and free 16 megs of verifier buffer.
>>>> On production systems it doesn't really make sense because the
>>>> programs/btf have gone through extensive testing and (mostly) guaranteed
>>>> to successfully load.
>>>>
>>>> Let's switch to a much smaller buffer by default (128 bytes, sys_bpf
>>>> doesn't accept smaller log buffer) and resize it if the kernel returns
>>>> ENOSPC. On the first ENOSPC error we resize the buffer to BPF_LOG_BUF_SIZE
>>>> and then, on each subsequent ENOSPC, we keep doubling the buffer.
>>>>
>>>> Signed-off-by: Stanislav Fomichev <sdf@google.com>
>>>> ---
>>>>   tools/lib/bpf/btf.c             | 10 +++++++++-
>>>>   tools/lib/bpf/libbpf.c          | 10 ++++++++--
>>>>   tools/lib/bpf/libbpf_internal.h |  2 ++
>>>>   3 files changed, 19 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
>>>> index 3d1c25fc97ae..53c7efc3b347 100644
>>>> --- a/tools/lib/bpf/btf.c
>>>> +++ b/tools/lib/bpf/btf.c
>>>> @@ -657,13 +657,14 @@ int btf__finalize_data(struct bpf_object *obj, struct btf *btf)
>>>>
>>>>   int btf__load(struct btf *btf)
>>>>   {
>>>> -       __u32 log_buf_size = BPF_LOG_BUF_SIZE;
>>>> +       __u32 log_buf_size = BPF_MIN_LOG_BUF_SIZE;
>>>>          char *log_buf = NULL;
>>>>          int err = 0;
>>>>
>>>>          if (btf->fd >= 0)
>>>>                  return -EEXIST;
>>>>
>>>> +retry_load:
>>>>          log_buf = malloc(log_buf_size);
>>>>          if (!log_buf)
>>>>                  return -ENOMEM;
>>>
>>> I'd argue that on first try we shouldn't allocate log_buf at all, then
>>> start allocating it using reasonable starting size (see below).
>> Agreed, makes sense.

The iproute2 BPF loader does the first try without any log buffer, and then
successively increases the size on failure [0].

libbpf should also assume the success case on the very first run, and only
then redo the load attempt with a log buffer, to avoid the overhead.

   [0] https://git.kernel.org/pub/scm/network/iproute2/iproute2.git/tree/lib/bpf.c

>>>> @@ -673,6 +674,13 @@ int btf__load(struct btf *btf)
>>>>          btf->fd = bpf_load_btf(btf->data, btf->data_size,
>>>>                                 log_buf, log_buf_size, false);
>>>>          if (btf->fd < 0) {
>>>> +               if (errno == ENOSPC) {
>>>> +                       log_buf_size = max((__u32)BPF_LOG_BUF_SIZE,
>>>> +                                          log_buf_size << 1);
>>>> +                       free(log_buf);
>>>> +                       goto retry_load;
>>>> +               }
>>>> +
>>>>                  err = -errno;
>>>>                  pr_warn("Error loading BTF: %s(%d)\n", strerror(errno), errno);
>>>>                  if (*log_buf)
[...]
>>>> diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
>>>> index 8c3afbd97747..2720f3366798 100644
>>>> --- a/tools/lib/bpf/libbpf_internal.h
>>>> +++ b/tools/lib/bpf/libbpf_internal.h
>>>> @@ -23,6 +23,8 @@
>>>>   #define BTF_PARAM_ENC(name, type) (name), (type)
>>>>   #define BTF_VAR_SECINFO_ENC(type, offset, size) (type), (offset), (size)
>>>>
>>>> +#define BPF_MIN_LOG_BUF_SIZE 128
>>>
>>> This seems way too low, if there is some error it almost certainly
>>> will be too short, probably for few iterations, just causing waste.
>>> Let's make it something a bit more reasonable, like 32KB or something?
>> In this case, maybe start with the existing 16M BPF_LOG_BUF_SIZE?
>> My goal here is optimize for the successful case. If there is an error the
>> size shouldn't matter that much.
> 
> Not feeling strongly. But we already will have a retry loop, so not
> too hard to do it in steps. Then also errors do happen in production
> as well, and it would be good to not eat too much memory
> unnecessarily.

Yeah, 128 is way too low. Either we should just malloc the max possible size
right away, or make it a two-step approach where we first try with half the
size and only retry with the max size if that fails. Given that programs can
be very complex, this will otherwise just unnecessarily prolong the time to
the failure verdict and put extra load on the verifier.
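
As a hedged sketch, the two-step schedule described here might look like the following (the helper name and the exact halving are illustrative assumptions, not something from the thread):

```c
#include <assert.h>

#define SKETCH_LOG_BUF_SIZE	(16u * 1024 * 1024)	/* stand-in for BPF_LOG_BUF_SIZE */

/* attempt 0: no log buffer (assume success); attempt 1: half the max;
 * attempt 2: the full max; anything later: 0, i.e. give up growing. */
static unsigned int two_step_log_buf_size(int attempt)
{
	switch (attempt) {
	case 0:
		return 0;
	case 1:
		return SKETCH_LOG_BUF_SIZE / 2;
	case 2:
		return SKETCH_LOG_BUF_SIZE;
	default:
		return 0;
	}
}
```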

Thanks,
Daniel
