* [PATCH bpf v2] bpf: Fix nested bpf_bprintf_prepare with more per-cpu buffers
From: Florent Revest @ 2021-05-11  8:10 UTC
  To: bpf
  Cc: ast, daniel, andrii, kpsingh, jackmanb, sdf, linux-kernel,
	Florent Revest, syzbot+63122d0bc347f18c1884

The bpf_seq_printf, bpf_trace_printk and bpf_snprintf helpers share one
per-cpu buffer that they use to store temporary data (arguments to
bprintf). They "get" that buffer with try_get_fmt_tmp_buf and "put" it
back at the end of their scope with bpf_bprintf_cleanup.
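
For reference, here is a simplified sketch of that get/put pattern
around a hypothetical bprintf-style helper, modeled on bpf_snprintf in
kernel/bpf/helpers.c (argument validation trimmed; "example_helper" is
an illustrative name, not a function from this patch):

	static int example_helper(char *str, u32 str_size, char *fmt,
				  u32 fmt_size, u64 *args, u32 num_args)
	{
		u32 *bin_args;
		int err;

		/* "get": reserves a per-cpu buffer via try_get_fmt_tmp_buf
		 * and converts args into the binary form bstr_printf expects
		 */
		err = bpf_bprintf_prepare(fmt, fmt_size, args, &bin_args,
					  num_args);
		if (err < 0)
			return err; /* -EBUSY when the buffer is taken */

		err = bstr_printf(str, str_size, fmt, bin_args);

		/* "put": releases the buffer and re-enables preemption */
		bpf_bprintf_cleanup();
		return err;
	}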

If one of these helpers is called within the scope of another, the
second "get" fails. For example: a first bpf program runs and uses
bpf_trace_printk, which internally calls raw_spin_lock_irqsave; that
function is traced by a second bpf program, which calls bpf_snprintf.
Essentially, these helpers are not re-entrant: the nested call returns
-EBUSY and prints a warning message once.
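
For illustration only (not part of this patch), here is a sketch of a
bpf-C program pair that could trigger such nesting, assuming a libbpf
recent enough to provide bpf_snprintf/BPF_SNPRINTF; the attach points
and program names are illustrative assumptions, not taken from the
syzkaller reproducer:

	#include <linux/bpf.h>
	#include <linux/ptrace.h>
	#include <bpf/bpf_helpers.h>

	/* First program: any hook from which bpf_trace_printk can run */
	SEC("tracepoint/syscalls/sys_enter_write")
	int outer_prog(void *ctx)
	{
		/* bpf_trace_printk takes a raw spin lock internally */
		bpf_printk("outer");
		return 0;
	}

	/* Second program: traces the spin lock taken by the first helper */
	SEC("kprobe/_raw_spin_lock_irqsave")
	int inner_prog(struct pt_regs *ctx)
	{
		char out[16];

		/* nested bprintf-style helper on the same cpu: before
		 * this patch, its "get" of the shared buffer failed
		 * with -EBUSY
		 */
		BPF_SNPRINTF(out, sizeof(out), "inner %d", 42);
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";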

This patch triples the number of bprintf buffers to allow three levels
of nesting. This is very similar to what was done for tracepoints in
commit 9594dc3c7e7 ("bpf: fix nested bpf tracepoints with per-cpu
data").

Fixes: d9c9e4db186a ("bpf: Factorize bpf_trace_printk and bpf_seq_printf")
Reported-by: syzbot+63122d0bc347f18c1884@syzkaller.appspotmail.com
Signed-off-by: Florent Revest <revest@chromium.org>
---
 kernel/bpf/helpers.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 544773970dbc..ef658a9ea5c9 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -696,34 +696,35 @@ static int bpf_trace_copy_string(char *buf, void *unsafe_ptr, char fmt_ptype,
  */
 #define MAX_PRINTF_BUF_LEN	512
 
-struct bpf_printf_buf {
-	char tmp_buf[MAX_PRINTF_BUF_LEN];
+/* Support executing three nested bprintf helper calls on a given CPU */
+struct bpf_bprintf_buffers {
+	char tmp_bufs[3][MAX_PRINTF_BUF_LEN];
 };
-static DEFINE_PER_CPU(struct bpf_printf_buf, bpf_printf_buf);
-static DEFINE_PER_CPU(int, bpf_printf_buf_used);
+static DEFINE_PER_CPU(struct bpf_bprintf_buffers, bpf_bprintf_bufs);
+static DEFINE_PER_CPU(int, bpf_bprintf_nest_level);
 
 static int try_get_fmt_tmp_buf(char **tmp_buf)
 {
-	struct bpf_printf_buf *bufs;
-	int used;
+	struct bpf_bprintf_buffers *bufs;
+	int nest_level;
 
 	preempt_disable();
-	used = this_cpu_inc_return(bpf_printf_buf_used);
-	if (WARN_ON_ONCE(used > 1)) {
-		this_cpu_dec(bpf_printf_buf_used);
+	nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
+	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bufs->tmp_bufs))) {
+		this_cpu_dec(bpf_bprintf_nest_level);
 		preempt_enable();
 		return -EBUSY;
 	}
-	bufs = this_cpu_ptr(&bpf_printf_buf);
-	*tmp_buf = bufs->tmp_buf;
+	bufs = this_cpu_ptr(&bpf_bprintf_bufs);
+	*tmp_buf = bufs->tmp_bufs[nest_level - 1];
 
 	return 0;
 }
 
 void bpf_bprintf_cleanup(void)
 {
-	if (this_cpu_read(bpf_printf_buf_used)) {
-		this_cpu_dec(bpf_printf_buf_used);
+	if (this_cpu_read(bpf_bprintf_nest_level)) {
+		this_cpu_dec(bpf_bprintf_nest_level);
 		preempt_enable();
 	}
 }
-- 
2.31.1.607.g51e8a6a459-goog



* Re: [PATCH bpf v2] bpf: Fix nested bpf_bprintf_prepare with more per-cpu buffers
From: Alexei Starovoitov @ 2021-05-11 21:07 UTC
  To: Florent Revest
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	KP Singh, Brendan Jackman, Stanislav Fomichev, LKML,
	syzbot+63122d0bc347f18c1884

On Tue, May 11, 2021 at 1:12 AM Florent Revest <revest@chromium.org> wrote:
>
> The bpf_seq_printf, bpf_trace_printk and bpf_snprintf helpers share one
> per-cpu buffer that they use to store temporary data (arguments to
> bprintf). They "get" that buffer with try_get_fmt_tmp_buf and "put" it
> back at the end of their scope with bpf_bprintf_cleanup.
>
> If one of these helpers is called within the scope of another, the
> second "get" fails. For example: a first bpf program runs and uses
> bpf_trace_printk, which internally calls raw_spin_lock_irqsave; that
> function is traced by a second bpf program, which calls bpf_snprintf.
> Essentially, these helpers are not re-entrant: the nested call returns
> -EBUSY and prints a warning message once.
>
> This patch triples the number of bprintf buffers to allow three levels
> of nesting. This is very similar to what was done for tracepoints in
> commit 9594dc3c7e7 ("bpf: fix nested bpf tracepoints with per-cpu
> data").
>
> Fixes: d9c9e4db186a ("bpf: Factorize bpf_trace_printk and bpf_seq_printf")
> Reported-by: syzbot+63122d0bc347f18c1884@syzkaller.appspotmail.com
> Signed-off-by: Florent Revest <revest@chromium.org>
> ---
>  kernel/bpf/helpers.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 544773970dbc..ef658a9ea5c9 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -696,34 +696,35 @@ static int bpf_trace_copy_string(char *buf, void *unsafe_ptr, char fmt_ptype,
>   */
>  #define MAX_PRINTF_BUF_LEN     512
>
> -struct bpf_printf_buf {
> -       char tmp_buf[MAX_PRINTF_BUF_LEN];
> +/* Support executing three nested bprintf helper calls on a given CPU */
> +struct bpf_bprintf_buffers {
> +       char tmp_bufs[3][MAX_PRINTF_BUF_LEN];
>  };
> -static DEFINE_PER_CPU(struct bpf_printf_buf, bpf_printf_buf);
> -static DEFINE_PER_CPU(int, bpf_printf_buf_used);
> +static DEFINE_PER_CPU(struct bpf_bprintf_buffers, bpf_bprintf_bufs);
> +static DEFINE_PER_CPU(int, bpf_bprintf_nest_level);
>
>  static int try_get_fmt_tmp_buf(char **tmp_buf)
>  {
> -       struct bpf_printf_buf *bufs;
> -       int used;
> +       struct bpf_bprintf_buffers *bufs;
> +       int nest_level;
>
>         preempt_disable();
> -       used = this_cpu_inc_return(bpf_printf_buf_used);
> -       if (WARN_ON_ONCE(used > 1)) {
> -               this_cpu_dec(bpf_printf_buf_used);
> +       nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
> +       if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bufs->tmp_bufs))) {
> +               this_cpu_dec(bpf_bprintf_nest_level);

Applied to bpf tree.
I think in the end the fix is simple enough and much better than an
on-stack buffer.


* Re: [PATCH bpf v2] bpf: Fix nested bpf_bprintf_prepare with more per-cpu buffers
From: Florent Revest @ 2021-05-11 21:12 UTC
  To: Alexei Starovoitov
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	KP Singh, Brendan Jackman, Stanislav Fomichev, LKML,
	syzbot+63122d0bc347f18c1884

On Tue, May 11, 2021 at 11:07 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, May 11, 2021 at 1:12 AM Florent Revest <revest@chromium.org> wrote:
> >
> > The bpf_seq_printf, bpf_trace_printk and bpf_snprintf helpers share one
> > per-cpu buffer that they use to store temporary data (arguments to
> > bprintf). They "get" that buffer with try_get_fmt_tmp_buf and "put" it
> > back at the end of their scope with bpf_bprintf_cleanup.
> >
> > If one of these helpers is called within the scope of another, the
> > second "get" fails. For example: a first bpf program runs and uses
> > bpf_trace_printk, which internally calls raw_spin_lock_irqsave; that
> > function is traced by a second bpf program, which calls bpf_snprintf.
> > Essentially, these helpers are not re-entrant: the nested call returns
> > -EBUSY and prints a warning message once.
> >
> > This patch triples the number of bprintf buffers to allow three levels
> > of nesting. This is very similar to what was done for tracepoints in
> > commit 9594dc3c7e7 ("bpf: fix nested bpf tracepoints with per-cpu
> > data").
> >
> > Fixes: d9c9e4db186a ("bpf: Factorize bpf_trace_printk and bpf_seq_printf")
> > Reported-by: syzbot+63122d0bc347f18c1884@syzkaller.appspotmail.com
> > Signed-off-by: Florent Revest <revest@chromium.org>
> > ---
> >  kernel/bpf/helpers.c | 27 ++++++++++++++-------------
> >  1 file changed, 14 insertions(+), 13 deletions(-)
> >
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index 544773970dbc..ef658a9ea5c9 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -696,34 +696,35 @@ static int bpf_trace_copy_string(char *buf, void *unsafe_ptr, char fmt_ptype,
> >   */
> >  #define MAX_PRINTF_BUF_LEN     512
> >
> > -struct bpf_printf_buf {
> > -       char tmp_buf[MAX_PRINTF_BUF_LEN];
> > +/* Support executing three nested bprintf helper calls on a given CPU */
> > +struct bpf_bprintf_buffers {
> > +       char tmp_bufs[3][MAX_PRINTF_BUF_LEN];
> >  };
> > -static DEFINE_PER_CPU(struct bpf_printf_buf, bpf_printf_buf);
> > -static DEFINE_PER_CPU(int, bpf_printf_buf_used);
> > +static DEFINE_PER_CPU(struct bpf_bprintf_buffers, bpf_bprintf_bufs);
> > +static DEFINE_PER_CPU(int, bpf_bprintf_nest_level);
> >
> >  static int try_get_fmt_tmp_buf(char **tmp_buf)
> >  {
> > -       struct bpf_printf_buf *bufs;
> > -       int used;
> > +       struct bpf_bprintf_buffers *bufs;
> > +       int nest_level;
> >
> >         preempt_disable();
> > -       used = this_cpu_inc_return(bpf_printf_buf_used);
> > -       if (WARN_ON_ONCE(used > 1)) {
> > -               this_cpu_dec(bpf_printf_buf_used);
> > +       nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
> > +       if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bufs->tmp_bufs))) {
> > +               this_cpu_dec(bpf_bprintf_nest_level);
>
> Applied to bpf tree.

Thanks Alexei!

> I think at the end the fix is simple enough and much better than an
> on-stack buffer.

Agree. :) I was skeptical at first but this turned out quite well in
the end, thank you for convincing me, Daniel & Andrii. ;)

