All of lore.kernel.org
 help / color / mirror / Atom feed
From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
To: Jiri Olsa <jolsa@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>, Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andriin@fb.com>,
	"Steven Rostedt (VMware)" <rostedt@goodmis.org>,
	Networking <netdev@vger.kernel.org>, bpf <bpf@vger.kernel.org>,
	Martin KaFai Lau <kafai@fb.com>, Song Liu <songliubraving@fb.com>,
	Yonghong Song <yhs@fb.com>,
	John Fastabend <john.fastabend@gmail.com>,
	KP Singh <kpsingh@chromium.org>, Daniel Xu <dxu@dxuuu.xyz>,
	Viktor Malik <vmalik@redhat.com>
Subject: Re: [PATCH 10/19] bpf: Allow to store caller's ip as argument
Date: Tue, 8 Jun 2021 14:02:56 -0700	[thread overview]
Message-ID: <CAEf4Bza2u_bHFkCVj4t0yPsNqBqvVkda8mQ-ff-rcgH1rAvRuw@mail.gmail.com> (raw)
In-Reply-To: <YL/Z+MMB8db5904r@krava>

On Tue, Jun 8, 2021 at 1:58 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Tue, Jun 08, 2021 at 11:49:31AM -0700, Andrii Nakryiko wrote:
> > On Sat, Jun 5, 2021 at 4:12 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > >
> > > When we will have multiple functions attached to trampoline
> > > we need to propagate the function's address to the bpf program.
> > >
> > > Adding new BPF_TRAMP_F_IP_ARG flag to arch_prepare_bpf_trampoline
> > > function that will store origin caller's address before function's
> > > arguments.
> > >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  arch/x86/net/bpf_jit_comp.c | 18 ++++++++++++++----
> > >  include/linux/bpf.h         |  5 +++++
> > >  2 files changed, 19 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> > > index b77e6bd78354..d2425c18272a 100644
> > > --- a/arch/x86/net/bpf_jit_comp.c
> > > +++ b/arch/x86/net/bpf_jit_comp.c
> > > @@ -1951,7 +1951,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> > >                                 void *orig_call)
> > >  {
> > >         int ret, i, cnt = 0, nr_args = m->nr_args;
> > > -       int stack_size = nr_args * 8;
> > > +       int stack_size = nr_args * 8, ip_arg = 0;
> > >         struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY];
> > >         struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
> > >         struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
> > > @@ -1975,6 +1975,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> > >                  */
> > >                 orig_call += X86_PATCH_SIZE;
> > >
> > > +       if (flags & BPF_TRAMP_F_IP_ARG)
> > > +               stack_size += 8;
> > > +
> >
> > nit: move it a bit up where we adjust stack_size for BPF_TRAMP_F_CALL_ORIG flag?
>
> ok
>
> >
> > >         prog = image;
> > >
> > >         EMIT1(0x55);             /* push rbp */
> > > @@ -1982,7 +1985,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> > >         EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
> > >         EMIT1(0x53);             /* push rbx */
> > >
> > > -       save_regs(m, &prog, nr_args, stack_size);
> > > +       if (flags & BPF_TRAMP_F_IP_ARG) {
> > > +               emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
> > > +               EMIT4(0x48, 0x83, 0xe8, X86_PATCH_SIZE); /* sub $X86_PATCH_SIZE,%rax*/
> > > +               emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -stack_size);
> > > +               ip_arg = 8;
> > > +       }
> >
> > why not pass flags into save_regs and let it handle this case without
> > this extra ip_arg adjustment?
> >
> > > +
> > > +       save_regs(m, &prog, nr_args, stack_size - ip_arg);
> > >
> > >         if (flags & BPF_TRAMP_F_CALL_ORIG) {
> > >                 /* arg1: mov rdi, im */
> > > @@ -2011,7 +2021,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> > >         }
> > >
> > >         if (flags & BPF_TRAMP_F_CALL_ORIG) {
> > > -               restore_regs(m, &prog, nr_args, stack_size);
> > > +               restore_regs(m, &prog, nr_args, stack_size - ip_arg);
> > >
> >
> > similarly (and symmetrically), pass flags into restore_regs() to
> > handle that ip_arg transparently?
>
> so you mean something like:
>
>         if (flags & BPF_TRAMP_F_IP_ARG)
>                 stack_size -= 8;
>
> in both save_regs and restore_regs function, right?

yes, but for save_regs it will do more (emit_ldx and stuff)

>
> jirka
>
> >
> > >                 if (flags & BPF_TRAMP_F_ORIG_STACK) {
> > >                         emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
> > > @@ -2052,7 +2062,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> > >                 }
> > >
> > >         if (flags & BPF_TRAMP_F_RESTORE_REGS)
> > > -               restore_regs(m, &prog, nr_args, stack_size);
> > > +               restore_regs(m, &prog, nr_args, stack_size - ip_arg);
> > >
> > >         /* This needs to be done regardless. If there were fmod_ret programs,
> > >          * the return value is only updated on the stack and still needs to be
> > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > > index 16fc600503fb..6cbf3c81c650 100644
> > > --- a/include/linux/bpf.h
> > > +++ b/include/linux/bpf.h
> > > @@ -559,6 +559,11 @@ struct btf_func_model {
> > >   */
> > >  #define BPF_TRAMP_F_ORIG_STACK         BIT(3)
> > >
> > > +/* First argument is IP address of the caller. Makes sense for fentry/fexit
> > > + * programs only.
> > > + */
> > > +#define BPF_TRAMP_F_IP_ARG             BIT(4)
> > > +
> > >  /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
> > >   * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
> > >   */
> > > --
> > > 2.31.1
> > >
> >
>

  reply	other threads:[~2021-06-08 21:03 UTC|newest]

Thread overview: 76+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2021-06-05 11:10 [RFCv3 00/19] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
2021-06-05 11:10 ` [PATCH 01/19] x86/ftrace: Remove extra orig rax move Jiri Olsa
2021-06-05 11:10 ` [PATCH 02/19] x86/ftrace: Remove fault protection code in prepare_ftrace_return Jiri Olsa
2021-06-05 11:10 ` [PATCH 03/19] x86/ftrace: Make function graph use ftrace directly Jiri Olsa
2021-06-08 18:35   ` Andrii Nakryiko
2021-06-08 18:51     ` Jiri Olsa
2021-06-08 19:11       ` Steven Rostedt
2021-06-05 11:10 ` [PATCH 04/19] tracing: Add trampoline/graph selftest Jiri Olsa
2021-06-05 11:10 ` [PATCH 05/19] ftrace: Add ftrace_add_rec_direct function Jiri Olsa
2021-06-05 11:10 ` [PATCH 06/19] ftrace: Add multi direct register/unregister interface Jiri Olsa
2021-06-05 11:10 ` [PATCH 07/19] ftrace: Add multi direct modify interface Jiri Olsa
2021-06-05 11:10 ` [PATCH 08/19] ftrace/samples: Add multi direct interface test module Jiri Olsa
2021-06-05 11:10 ` [PATCH 09/19] bpf, x64: Allow to use caller address from stack Jiri Olsa
2021-06-07  3:07   ` Yonghong Song
2021-06-07 18:13     ` Jiri Olsa
2021-06-05 11:10 ` [PATCH 10/19] bpf: Allow to store caller's ip as argument Jiri Olsa
2021-06-07  3:21   ` Yonghong Song
2021-06-07 18:15     ` Jiri Olsa
2021-06-08 18:49   ` Andrii Nakryiko
2021-06-08 20:58     ` Jiri Olsa
2021-06-08 21:02       ` Andrii Nakryiko [this message]
2021-06-08 21:11         ` Jiri Olsa
2021-06-05 11:10 ` [PATCH 11/19] bpf: Add support to load multi func tracing program Jiri Olsa
2021-06-07  3:56   ` Yonghong Song
2021-06-07 18:18     ` Jiri Olsa
2021-06-07 19:35       ` Yonghong Song
2021-06-05 11:10 ` [PATCH 12/19] bpf: Add bpf_trampoline_alloc function Jiri Olsa
2021-06-05 11:10 ` [PATCH 13/19] bpf: Add support to link multi func tracing program Jiri Olsa
2021-06-07  5:36   ` Yonghong Song
2021-06-07 18:25     ` Jiri Olsa
2021-06-07 19:39       ` Yonghong Song
2021-06-08 15:42   ` Alexei Starovoitov
2021-06-08 18:17     ` Jiri Olsa
2021-06-08 18:49       ` Alexei Starovoitov
2021-06-08 21:07         ` Jiri Olsa
2021-06-08 23:05           ` Alexei Starovoitov
2021-06-09  5:08             ` Andrii Nakryiko
2021-06-09 13:42               ` Jiri Olsa
2021-06-09 13:33             ` Jiri Olsa
2021-06-09  5:18   ` Andrii Nakryiko
2021-06-09 13:53     ` Jiri Olsa
2021-06-05 11:10 ` [PATCH 14/19] libbpf: Add btf__find_by_pattern_kind function Jiri Olsa
2021-06-09  5:29   ` Andrii Nakryiko
2021-06-09 13:59     ` Jiri Olsa
2021-06-09 14:19       ` Jiri Olsa
2021-06-05 11:10 ` [PATCH 15/19] libbpf: Add support to link multi func tracing program Jiri Olsa
2021-06-07  5:49   ` Yonghong Song
2021-06-07 18:28     ` Jiri Olsa
2021-06-07 19:42       ` Yonghong Song
2021-06-07 20:11         ` Jiri Olsa
2021-06-09  5:34   ` Andrii Nakryiko
2021-06-09 14:17     ` Jiri Olsa
2021-06-10 17:05       ` Andrii Nakryiko
2021-06-10 20:35         ` Jiri Olsa
2021-06-05 11:10 ` [PATCH 16/19] selftests/bpf: Add fentry multi func test Jiri Olsa
2021-06-07  6:06   ` Yonghong Song
2021-06-07 18:42     ` Jiri Olsa
2021-06-09  5:40   ` Andrii Nakryiko
2021-06-09 14:29     ` Jiri Olsa
2021-06-10 17:00       ` Andrii Nakryiko
2021-06-10 20:28         ` Jiri Olsa
2021-06-05 11:10 ` [PATCH 17/19] selftests/bpf: Add fexit " Jiri Olsa
2021-06-05 11:10 ` [PATCH 18/19] selftests/bpf: Add fentry/fexit " Jiri Olsa
2021-06-09  5:41   ` Andrii Nakryiko
2021-06-09 14:29     ` Jiri Olsa
2021-06-05 11:10 ` [PATCH 19/19] selftests/bpf: Temporary fix for fentry_fexit_multi_test Jiri Olsa
2021-06-17 20:29 ` [RFCv3 00/19] x86/ftrace/bpf: Add batch support for direct/tracing attach Andrii Nakryiko
2021-06-19  8:33   ` Jiri Olsa
2021-06-19 16:19     ` Yonghong Song
2021-06-19 17:09       ` Jiri Olsa
2021-06-20 16:56         ` Yonghong Song
2021-06-20 17:47           ` Alexei Starovoitov
2021-06-21  6:46             ` Andrii Nakryiko
2021-06-21  6:50     ` Andrii Nakryiko
2021-07-06 20:26       ` Andrii Nakryiko
2021-07-07 15:19         ` Jiri Olsa

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CAEf4Bza2u_bHFkCVj4t0yPsNqBqvVkda8mQ-ff-rcgH1rAvRuw@mail.gmail.com \
    --to=andrii.nakryiko@gmail.com \
    --cc=andriin@fb.com \
    --cc=ast@kernel.org \
    --cc=bpf@vger.kernel.org \
    --cc=daniel@iogearbox.net \
    --cc=dxu@dxuuu.xyz \
    --cc=john.fastabend@gmail.com \
    --cc=jolsa@kernel.org \
    --cc=jolsa@redhat.com \
    --cc=kafai@fb.com \
    --cc=kpsingh@chromium.org \
    --cc=netdev@vger.kernel.org \
    --cc=rostedt@goodmis.org \
    --cc=songliubraving@fb.com \
    --cc=vmalik@redhat.com \
    --cc=yhs@fb.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.