* [PATCH v5 bpf-next 01/11] bpf: Support ->fill_link_info for kprobe_multi
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 21:45 ` Andrii Nakryiko
2023-06-23 14:15 ` [PATCH v5 bpf-next 02/11] bpftool: Dump the kernel symbol's module name Yafang Shao
` (9 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
With support for fill_link_info added to the kprobe_multi link, users gain
the ability to inspect it conveniently with `bpftool link show`. This
enhancement provides valuable information to the user, including the count
of probed functions and their respective addresses. Note that if the
kptr_restrict setting does not permit it, the probed addresses are not
exposed, preserving security.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
include/uapi/linux/bpf.h | 5 +++++
kernel/trace/bpf_trace.c | 28 ++++++++++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 5 +++++
3 files changed, 38 insertions(+)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index a7b5e91..23691ea 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -6438,6 +6438,11 @@ struct bpf_link_info {
__s32 priority;
__u32 flags;
} netfilter;
+ struct {
+ __aligned_u64 addrs; /* in/out: addresses buffer ptr */
+ __u32 count;
+ __u32 flags;
+ } kprobe_multi;
};
} __attribute__((aligned(8)));
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 2bc41e6..2123197b 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2459,6 +2459,7 @@ struct bpf_kprobe_multi_link {
u32 cnt;
u32 mods_cnt;
struct module **mods;
+ u32 flags;
};
struct bpf_kprobe_multi_run_ctx {
@@ -2548,9 +2549,35 @@ static void bpf_kprobe_multi_link_dealloc(struct bpf_link *link)
kfree(kmulti_link);
}
+static int bpf_kprobe_multi_link_fill_link_info(const struct bpf_link *link,
+ struct bpf_link_info *info)
+{
+ u64 __user *uaddrs = u64_to_user_ptr(info->kprobe_multi.addrs);
+ struct bpf_kprobe_multi_link *kmulti_link;
+ u32 ucount = info->kprobe_multi.count;
+
+ if (!uaddrs ^ !ucount)
+ return -EINVAL;
+
+ kmulti_link = container_of(link, struct bpf_kprobe_multi_link, link);
+ info->kprobe_multi.count = kmulti_link->cnt;
+ info->kprobe_multi.flags = kmulti_link->flags;
+
+ if (!uaddrs)
+ return 0;
+ if (ucount < kmulti_link->cnt)
+ return -EINVAL;
+ if (!kallsyms_show_value(current_cred()))
+ return 0;
+ if (copy_to_user(uaddrs, kmulti_link->addrs, ucount * sizeof(u64)))
+ return -EFAULT;
+ return 0;
+}
+
static const struct bpf_link_ops bpf_kprobe_multi_link_lops = {
.release = bpf_kprobe_multi_link_release,
.dealloc = bpf_kprobe_multi_link_dealloc,
+ .fill_link_info = bpf_kprobe_multi_link_fill_link_info,
};
static void bpf_kprobe_multi_cookie_swap(void *a, void *b, int size, const void *priv)
@@ -2862,6 +2889,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
link->addrs = addrs;
link->cookies = cookies;
link->cnt = cnt;
+ link->flags = flags;
if (cookies) {
/*
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index a7b5e91..23691ea 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -6438,6 +6438,11 @@ struct bpf_link_info {
__s32 priority;
__u32 flags;
} netfilter;
+ struct {
+ __aligned_u64 addrs; /* in/out: addresses buffer ptr */
+ __u32 count;
+ __u32 flags;
+ } kprobe_multi;
};
} __attribute__((aligned(8)));
--
1.8.3.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
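The in/out contract of the new `kprobe_multi` info field can be modeled in plain userspace C. The sketch below captures only the v5 semantics as posted (the function name and errno-style returns are illustrative, not kernel code): passing a buffer without a count, or a count without a buffer, is rejected; a NULL buffer queries the count; a sufficiently large buffer receives the addresses.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Model of the v5 checks in bpf_kprobe_multi_link_fill_link_info():
 * uaddrs/ucount stand in for the userspace-supplied buffer and count,
 * kaddrs/kcnt for the link's stored addresses. "!uaddrs ^ !ucount"
 * rejects a buffer without a count and vice versa. */
static int fill_model(unsigned long long *uaddrs, unsigned int ucount,
                      const unsigned long long *kaddrs, unsigned int kcnt,
                      unsigned int *out_count)
{
	if (!uaddrs ^ !ucount)
		return -EINVAL;

	*out_count = kcnt;	/* the real count is always reported */
	if (!uaddrs)
		return 0;	/* query mode: count only */
	if (ucount < kcnt)
		return -EINVAL;	/* buffer too small (v5 behavior) */

	memcpy(uaddrs, kaddrs, kcnt * sizeof(*kaddrs));
	return 0;
}
```

A caller therefore makes two calls: one with a NULL `addrs` pointer to learn the count, then one with an adequately sized buffer; that is how bpftool consumes this interface later in the series.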
* Re: [PATCH v5 bpf-next 01/11] bpf: Support ->fill_link_info for kprobe_multi
2023-06-23 14:15 ` [PATCH v5 bpf-next 01/11] bpf: Support ->fill_link_info for kprobe_multi Yafang Shao
@ 2023-06-23 21:45 ` Andrii Nakryiko
2023-06-25 14:34 ` Yafang Shao
0 siblings, 1 reply; 25+ messages in thread
From: Andrii Nakryiko @ 2023-06-23 21:45 UTC (permalink / raw)
To: Yafang Shao
Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat, bpf,
linux-trace-kernel
On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> With support for fill_link_info added to the kprobe_multi link, users gain
> the ability to inspect it conveniently with `bpftool link show`. This
> enhancement provides valuable information to the user, including the count
> of probed functions and their respective addresses. Note that if the
> kptr_restrict setting does not permit it, the probed addresses are not
> exposed, preserving security.
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
> include/uapi/linux/bpf.h | 5 +++++
> kernel/trace/bpf_trace.c | 28 ++++++++++++++++++++++++++++
> tools/include/uapi/linux/bpf.h | 5 +++++
> 3 files changed, 38 insertions(+)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index a7b5e91..23691ea 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -6438,6 +6438,11 @@ struct bpf_link_info {
> __s32 priority;
> __u32 flags;
> } netfilter;
> + struct {
> + __aligned_u64 addrs; /* in/out: addresses buffer ptr */
> + __u32 count;
> + __u32 flags;
> + } kprobe_multi;
> };
> } __attribute__((aligned(8)));
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 2bc41e6..2123197b 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2459,6 +2459,7 @@ struct bpf_kprobe_multi_link {
> u32 cnt;
> u32 mods_cnt;
> struct module **mods;
> + u32 flags;
> };
>
> struct bpf_kprobe_multi_run_ctx {
> @@ -2548,9 +2549,35 @@ static void bpf_kprobe_multi_link_dealloc(struct bpf_link *link)
> kfree(kmulti_link);
> }
>
> +static int bpf_kprobe_multi_link_fill_link_info(const struct bpf_link *link,
> + struct bpf_link_info *info)
> +{
> + u64 __user *uaddrs = u64_to_user_ptr(info->kprobe_multi.addrs);
> + struct bpf_kprobe_multi_link *kmulti_link;
> + u32 ucount = info->kprobe_multi.count;
> +
> + if (!uaddrs ^ !ucount)
> + return -EINVAL;
> +
> + kmulti_link = container_of(link, struct bpf_kprobe_multi_link, link);
> + info->kprobe_multi.count = kmulti_link->cnt;
> + info->kprobe_multi.flags = kmulti_link->flags;
> +
> + if (!uaddrs)
> + return 0;
> + if (ucount < kmulti_link->cnt)
> + return -EINVAL;
it would probably be sane behavior to copy ucount items and return -E2BIG
> + if (!kallsyms_show_value(current_cred()))
> + return 0;
at least we should zero out kmulti_link->cnt elements. Otherwise it's
hard for user-space to know whether the returned data is garbage or not.
> + if (copy_to_user(uaddrs, kmulti_link->addrs, ucount * sizeof(u64)))
s/ucount/kmulti_link->cnt/ ?
> + return -EFAULT;
> + return 0;
> +}
> +
> static const struct bpf_link_ops bpf_kprobe_multi_link_lops = {
> .release = bpf_kprobe_multi_link_release,
> .dealloc = bpf_kprobe_multi_link_dealloc,
> + .fill_link_info = bpf_kprobe_multi_link_fill_link_info,
> };
>
> static void bpf_kprobe_multi_cookie_swap(void *a, void *b, int size, const void *priv)
> @@ -2862,6 +2889,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
> link->addrs = addrs;
> link->cookies = cookies;
> link->cnt = cnt;
> + link->flags = flags;
>
> if (cookies) {
> /*
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index a7b5e91..23691ea 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -6438,6 +6438,11 @@ struct bpf_link_info {
> __s32 priority;
> __u32 flags;
> } netfilter;
> + struct {
> + __aligned_u64 addrs; /* in/out: addresses buffer ptr */
> + __u32 count;
> + __u32 flags;
> + } kprobe_multi;
> };
> } __attribute__((aligned(8)));
>
> --
> 1.8.3.1
>
* Re: [PATCH v5 bpf-next 01/11] bpf: Support ->fill_link_info for kprobe_multi
2023-06-23 21:45 ` Andrii Nakryiko
@ 2023-06-25 14:34 ` Yafang Shao
0 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-25 14:34 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat, bpf,
linux-trace-kernel
On Sat, Jun 24, 2023 at 5:45 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> >
> > With support for fill_link_info added to the kprobe_multi link, users gain
> > the ability to inspect it conveniently with `bpftool link show`. This
> > enhancement provides valuable information to the user, including the count
> > of probed functions and their respective addresses. Note that if the
> > kptr_restrict setting does not permit it, the probed addresses are not
> > exposed, preserving security.
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > ---
> > include/uapi/linux/bpf.h | 5 +++++
> > kernel/trace/bpf_trace.c | 28 ++++++++++++++++++++++++++++
> > tools/include/uapi/linux/bpf.h | 5 +++++
> > 3 files changed, 38 insertions(+)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index a7b5e91..23691ea 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -6438,6 +6438,11 @@ struct bpf_link_info {
> > __s32 priority;
> > __u32 flags;
> > } netfilter;
> > + struct {
> > + __aligned_u64 addrs; /* in/out: addresses buffer ptr */
> > + __u32 count;
> > + __u32 flags;
> > + } kprobe_multi;
> > };
> > } __attribute__((aligned(8)));
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 2bc41e6..2123197b 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -2459,6 +2459,7 @@ struct bpf_kprobe_multi_link {
> > u32 cnt;
> > u32 mods_cnt;
> > struct module **mods;
> > + u32 flags;
> > };
> >
> > struct bpf_kprobe_multi_run_ctx {
> > @@ -2548,9 +2549,35 @@ static void bpf_kprobe_multi_link_dealloc(struct bpf_link *link)
> > kfree(kmulti_link);
> > }
> >
> > +static int bpf_kprobe_multi_link_fill_link_info(const struct bpf_link *link,
> > + struct bpf_link_info *info)
> > +{
> > + u64 __user *uaddrs = u64_to_user_ptr(info->kprobe_multi.addrs);
> > + struct bpf_kprobe_multi_link *kmulti_link;
> > + u32 ucount = info->kprobe_multi.count;
> > +
> > + if (!uaddrs ^ !ucount)
> > + return -EINVAL;
> > +
> > + kmulti_link = container_of(link, struct bpf_kprobe_multi_link, link);
> > + info->kprobe_multi.count = kmulti_link->cnt;
> > + info->kprobe_multi.flags = kmulti_link->flags;
> > +
> > + if (!uaddrs)
> > + return 0;
> > + if (ucount < kmulti_link->cnt)
> > + return -EINVAL;
>
> it would probably be sane behavior to copy ucount items and return -E2BIG
Agree.
>
> > + if (!kallsyms_show_value(current_cred()))
> > + return 0;
>
> at least we should zero out kmulti_link->cnt elements. Otherwise it's
> hard for user-space to know whether the returned data is garbage or not.
Agree. Should clear it.
>
>
> > + if (copy_to_user(uaddrs, kmulti_link->addrs, ucount * sizeof(u64)))
>
> s/ucount/kmulti_link->cnt/ ?
Yes. Thanks for pointing it out.
--
Regards
Yafang
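Taken together, the three review points above describe the behavior the next revision should implement: copy at most `ucount` entries, signal a too-small buffer with -E2BIG rather than -EINVAL, and zero the output instead of leaving it untouched when kallsyms values may not be shown. A userspace model of that agreed-upon semantics (illustrative names only; this is a sketch of the discussed behavior, not the actual kernel patch):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <string.h>

/* Model of the revised semantics: may_show_value stands in for
 * kallsyms_show_value(current_cred()). */
static int fill_model_v6(unsigned long long *uaddrs, unsigned int ucount,
			 const unsigned long long *kaddrs, unsigned int kcnt,
			 bool may_show_value, unsigned int *out_count)
{
	unsigned int n;

	if (!uaddrs ^ !ucount)
		return -EINVAL;

	*out_count = kcnt;
	if (!uaddrs)
		return 0;

	n = ucount < kcnt ? ucount : kcnt;	/* copy at most ucount entries */
	if (!may_show_value)
		memset(uaddrs, 0, n * sizeof(*uaddrs));	/* zeroed, not garbage */
	else
		memcpy(uaddrs, kaddrs, n * sizeof(*uaddrs));

	return ucount < kcnt ? -E2BIG : 0;	/* partial copy is signalled */
}
```

With this shape, userspace can distinguish "buffer too small" (-E2BIG plus the real count) from "addresses withheld" (success with zeroed entries).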
* [PATCH v5 bpf-next 02/11] bpftool: Dump the kernel symbol's module name
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
2023-06-23 14:15 ` [PATCH v5 bpf-next 01/11] bpf: Support ->fill_link_info for kprobe_multi Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 16:48 ` Quentin Monnet
2023-06-23 14:15 ` [PATCH v5 bpf-next 03/11] bpftool: Show kprobe_multi link info Yafang Shao
` (8 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
If the kernel symbol is in a module, we will dump the module name as
well. The square brackets around the module name are trimmed.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
tools/bpf/bpftool/xlated_dumper.c | 6 +++++-
tools/bpf/bpftool/xlated_dumper.h | 2 ++
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
index da608e1..567f56d 100644
--- a/tools/bpf/bpftool/xlated_dumper.c
+++ b/tools/bpf/bpftool/xlated_dumper.c
@@ -46,7 +46,11 @@ void kernel_syms_load(struct dump_data *dd)
}
dd->sym_mapping = tmp;
sym = &dd->sym_mapping[dd->sym_count];
- if (sscanf(buff, "%p %*c %s", &address, sym->name) != 2)
+
+ /* module is optional */
+ sym->module[0] = '\0';
+ /* trim the square brackets around the module name */
+ if (sscanf(buff, "%p %*c %s [%[^]]s", &address, sym->name, sym->module) < 2)
continue;
sym->address = (unsigned long)address;
if (!strcmp(sym->name, "__bpf_call_base")) {
diff --git a/tools/bpf/bpftool/xlated_dumper.h b/tools/bpf/bpftool/xlated_dumper.h
index 9a94637..db3ba067 100644
--- a/tools/bpf/bpftool/xlated_dumper.h
+++ b/tools/bpf/bpftool/xlated_dumper.h
@@ -5,12 +5,14 @@
#define __BPF_TOOL_XLATED_DUMPER_H
#define SYM_MAX_NAME 256
+#define MODULE_MAX_NAME 64
struct bpf_prog_linfo;
struct kernel_sym {
unsigned long address;
char name[SYM_MAX_NAME];
+ char module[MODULE_MAX_NAME];
};
struct dump_data {
--
1.8.3.1
* Re: [PATCH v5 bpf-next 02/11] bpftool: Dump the kernel symbol's module name
2023-06-23 14:15 ` [PATCH v5 bpf-next 02/11] bpftool: Dump the kernel symbol's module name Yafang Shao
@ 2023-06-23 16:48 ` Quentin Monnet
0 siblings, 0 replies; 25+ messages in thread
From: Quentin Monnet @ 2023-06-23 16:48 UTC (permalink / raw)
To: Yafang Shao, ast, daniel, john.fastabend, andrii, martin.lau,
song, yhs, kpsingh, sdf, haoluo, jolsa, rostedt, mhiramat
Cc: bpf, linux-trace-kernel
2023-06-23 14:15 UTC+0000 ~ Yafang Shao <laoar.shao@gmail.com>
> If the kernel symbol is in a module, we will dump the module name as
> well. The square brackets around the module name are trimmed.
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
> tools/bpf/bpftool/xlated_dumper.c | 6 +++++-
> tools/bpf/bpftool/xlated_dumper.h | 2 ++
> 2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
> index da608e1..567f56d 100644
> --- a/tools/bpf/bpftool/xlated_dumper.c
> +++ b/tools/bpf/bpftool/xlated_dumper.c
> @@ -46,7 +46,11 @@ void kernel_syms_load(struct dump_data *dd)
> }
> dd->sym_mapping = tmp;
> sym = &dd->sym_mapping[dd->sym_count];
> - if (sscanf(buff, "%p %*c %s", &address, sym->name) != 2)
> +
> + /* module is optional */
> + sym->module[0] = '\0';
> + /* trim the square brackets around the module name */
> + if (sscanf(buff, "%p %*c %s [%[^]]s", &address, sym->name, sym->module) < 2)
> continue;
> sym->address = (unsigned long)address;
> if (!strcmp(sym->name, "__bpf_call_base")) {
> diff --git a/tools/bpf/bpftool/xlated_dumper.h b/tools/bpf/bpftool/xlated_dumper.h
> index 9a94637..db3ba067 100644
> --- a/tools/bpf/bpftool/xlated_dumper.h
> +++ b/tools/bpf/bpftool/xlated_dumper.h
> @@ -5,12 +5,14 @@
> #define __BPF_TOOL_XLATED_DUMPER_H
>
> #define SYM_MAX_NAME 256
> +#define MODULE_MAX_NAME 64
>
> struct bpf_prog_linfo;
>
> struct kernel_sym {
> unsigned long address;
> char name[SYM_MAX_NAME];
> + char module[MODULE_MAX_NAME];
> };
>
> struct dump_data {
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Thanks!
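The `%[^]]` scanset in the new `sscanf()` call is what trims the square brackets: it matches everything up to the closing `]` of an optional `[module]` suffix on a /proc/kallsyms line, and the return-value check `< 2` makes the module field optional. A standalone sketch of the same parse (using `%llx` here instead of the patch's `%p`, purely so the example is portable outside glibc's kernel-style pointer scanning; field names mirror the patch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define SYM_MAX_NAME 256
#define MODULE_MAX_NAME 64

struct sym {
	unsigned long long address;
	char name[SYM_MAX_NAME];
	char module[MODULE_MAX_NAME];
};

/* Parse one /proc/kallsyms line: "<addr> <type> <name> [module]".
 * The "[module]" part is optional, so two successful conversions are
 * enough; the scanset %63[^]] keeps everything between the brackets
 * and drops the brackets themselves. */
static int parse_ksym_line(const char *line, struct sym *s)
{
	s->module[0] = '\0';
	if (sscanf(line, "%llx %*c %255s [%63[^]]]",
		   &s->address, s->name, s->module) < 2)
		return -1;
	return 0;
}
```

For a vmlinux symbol the literal `[` simply fails to match and `sscanf()` stops after two conversions, leaving `module` empty, which is exactly the "module is optional" behavior the patch relies on.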
* [PATCH v5 bpf-next 03/11] bpftool: Show kprobe_multi link info
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
2023-06-23 14:15 ` [PATCH v5 bpf-next 01/11] bpf: Support ->fill_link_info for kprobe_multi Yafang Shao
2023-06-23 14:15 ` [PATCH v5 bpf-next 02/11] bpftool: Dump the kernel symbol's module name Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 16:48 ` Quentin Monnet
2023-06-23 14:15 ` [PATCH v5 bpf-next 04/11] bpf: Protect probed address based on kptr_restrict setting Yafang Shao
` (7 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
Show the already exposed kprobe_multi link info in bpftool. The result is
as follows:
$ tools/bpf/bpftool/bpftool link show
4: kprobe_multi prog 22
kprobe.multi func_cnt 7
addr func [module]
ffffffffbbc44f20 schedule_timeout_interruptible
ffffffffbbc44f60 schedule_timeout_killable
ffffffffbbc44fa0 schedule_timeout_uninterruptible
ffffffffbbc44fe0 schedule_timeout_idle
ffffffffc08028d0 xfs_trans_get_efd [xfs]
ffffffffc080fa10 xfs_trans_get_buf_map [xfs]
ffffffffc0813320 xfs_trans_get_dqtrx [xfs]
pids kprobe_multi(1434978)
5: kprobe_multi prog 22
kretprobe.multi func_cnt 7
addr func [module]
ffffffffbbc44f20 schedule_timeout_interruptible
ffffffffbbc44f60 schedule_timeout_killable
ffffffffbbc44fa0 schedule_timeout_uninterruptible
ffffffffbbc44fe0 schedule_timeout_idle
ffffffffc08028d0 xfs_trans_get_efd [xfs]
ffffffffc080fa10 xfs_trans_get_buf_map [xfs]
ffffffffc0813320 xfs_trans_get_dqtrx [xfs]
pids kprobe_multi(1434978)
$ tools/bpf/bpftool/bpftool link show -j
[{"id":4,"type":"kprobe_multi","prog_id":22,"retprobe":false,"func_cnt":7,"funcs":[{"addr":18446744072564789024,"func":"schedule_timeout_interruptible","module":""},{"addr":18446744072564789088,"func":"schedule_timeout_killable","module":""},{"addr":18446744072564789152,"func":"schedule_timeout_uninterruptible","module":""},{"addr":18446744072564789216,"func":"schedule_timeout_idle","module":""},{"addr":18446744072644208848,"func":"xfs_trans_get_efd","module":"xfs"},{"addr":18446744072644262416,"func":"xfs_trans_get_buf_map","module":"xfs"},{"addr":18446744072644277024,"func":"xfs_trans_get_dqtrx","module":"xfs"}],"pids":[{"pid":1434978,"comm":"kprobe_multi"}]},{"id":5,"type":"kprobe_multi","prog_id":22,"retprobe":true,"func_cnt":7,"funcs":[{"addr":18446744072564789024,"func":"schedule_timeout_interruptible","module":""},{"addr":18446744072564789088,"func":"schedule_timeout_killable","module":""},{"addr":18446744072564789152,"func":"schedule_timeout_uninterruptible","module":""},{"addr":18446744072564789216,"func":"schedule_timeout_idle","module":""},{"addr":18446744072644208848,"func":"xfs_trans_get_efd","module":"xfs"},{"addr":18446744072644262416,"func":"xfs_trans_get_buf_map","module":"xfs"},{"addr":18446744072644277024,"func":"xfs_trans_get_dqtrx","module":"xfs"}],"pids":[{"pid":1434978,"comm":"kprobe_multi"}]}]
When kptr_restrict is 2, the result is,
$ tools/bpf/bpftool/bpftool link show
4: kprobe_multi prog 22
kprobe.multi func_cnt 7
5: kprobe_multi prog 22
kretprobe.multi func_cnt 7
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
tools/bpf/bpftool/link.c | 109 ++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 108 insertions(+), 1 deletion(-)
diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
index 2d78607..8461e6d 100644
--- a/tools/bpf/bpftool/link.c
+++ b/tools/bpf/bpftool/link.c
@@ -14,8 +14,10 @@
#include "json_writer.h"
#include "main.h"
+#include "xlated_dumper.h"
static struct hashmap *link_table;
+static struct dump_data dd = {};
static int link_parse_fd(int *argc, char ***argv)
{
@@ -166,6 +168,45 @@ static int get_prog_info(int prog_id, struct bpf_prog_info *info)
return err;
}
+static int cmp_u64(const void *A, const void *B)
+{
+ const __u64 *a = A, *b = B;
+
+ return *a - *b;
+}
+
+static void
+show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
+{
+ __u32 i, j = 0;
+ __u64 *addrs;
+
+ jsonw_bool_field(json_wtr, "retprobe",
+ info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN);
+ jsonw_uint_field(json_wtr, "func_cnt", info->kprobe_multi.count);
+ jsonw_name(json_wtr, "funcs");
+ jsonw_start_array(json_wtr);
+ addrs = (__u64 *)u64_to_ptr(info->kprobe_multi.addrs);
+ qsort((void *)addrs, info->kprobe_multi.count, sizeof(__u64), cmp_u64);
+
+ /* Load it once for all. */
+ if (!dd.sym_count)
+ kernel_syms_load(&dd);
+ for (i = 0; i < dd.sym_count; i++) {
+ if (dd.sym_mapping[i].address != addrs[j])
+ continue;
+ jsonw_start_object(json_wtr);
+ jsonw_uint_field(json_wtr, "addr", dd.sym_mapping[i].address);
+ jsonw_string_field(json_wtr, "func", dd.sym_mapping[i].name);
+ /* Print none if it is vmlinux */
+ jsonw_string_field(json_wtr, "module", dd.sym_mapping[i].module);
+ jsonw_end_object(json_wtr);
+ if (j++ == info->kprobe_multi.count)
+ break;
+ }
+ jsonw_end_array(json_wtr);
+}
+
static int show_link_close_json(int fd, struct bpf_link_info *info)
{
struct bpf_prog_info prog_info;
@@ -218,6 +259,9 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
jsonw_uint_field(json_wtr, "map_id",
info->struct_ops.map_id);
break;
+ case BPF_LINK_TYPE_KPROBE_MULTI:
+ show_kprobe_multi_json(info, json_wtr);
+ break;
default:
break;
}
@@ -351,6 +395,44 @@ void netfilter_dump_plain(const struct bpf_link_info *info)
printf(" flags 0x%x", info->netfilter.flags);
}
+static void show_kprobe_multi_plain(struct bpf_link_info *info)
+{
+ __u32 i, j = 0;
+ __u64 *addrs;
+
+ if (!info->kprobe_multi.count)
+ return;
+
+ if (info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN)
+ printf("\n\tkretprobe.multi ");
+ else
+ printf("\n\tkprobe.multi ");
+ printf("func_cnt %u ", info->kprobe_multi.count);
+ addrs = (__u64 *)u64_to_ptr(info->kprobe_multi.addrs);
+ qsort((void *)addrs, info->kprobe_multi.count, sizeof(__u64), cmp_u64);
+
+ /* Load it once for all. */
+ if (!dd.sym_count)
+ kernel_syms_load(&dd);
+ if (!dd.sym_count)
+ return;
+
+ printf("\n\t%-16s %s", "addr", "func [module]");
+ for (i = 0; i < dd.sym_count; i++) {
+ if (dd.sym_mapping[i].address != addrs[j])
+ continue;
+ printf("\n\t%016lx %s",
+ dd.sym_mapping[i].address, dd.sym_mapping[i].name);
+ if (dd.sym_mapping[i].module[0] != '\0')
+ printf(" [%s] ", dd.sym_mapping[i].module);
+ else
+ printf(" ");
+
+ if (j++ == info->kprobe_multi.count)
+ break;
+ }
+}
+
static int show_link_close_plain(int fd, struct bpf_link_info *info)
{
struct bpf_prog_info prog_info;
@@ -396,6 +478,9 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
case BPF_LINK_TYPE_NETFILTER:
netfilter_dump_plain(info);
break;
+ case BPF_LINK_TYPE_KPROBE_MULTI:
+ show_kprobe_multi_plain(info);
+ break;
default:
break;
}
@@ -417,7 +502,9 @@ static int do_show_link(int fd)
{
struct bpf_link_info info;
__u32 len = sizeof(info);
+ __u64 *addrs = NULL;
char buf[256];
+ int count;
int err;
memset(&info, 0, sizeof(info));
@@ -441,12 +528,28 @@ static int do_show_link(int fd)
info.iter.target_name_len = sizeof(buf);
goto again;
}
+ if (info.type == BPF_LINK_TYPE_KPROBE_MULTI &&
+ !info.kprobe_multi.addrs) {
+ count = info.kprobe_multi.count;
+ if (count) {
+ addrs = calloc(count, sizeof(__u64));
+ if (!addrs) {
+ p_err("mem alloc failed");
+ close(fd);
+ return -1;
+ }
+ info.kprobe_multi.addrs = (unsigned long)addrs;
+ goto again;
+ }
+ }
if (json_output)
show_link_close_json(fd, &info);
else
show_link_close_plain(fd, &info);
+ if (addrs)
+ free(addrs);
close(fd);
return 0;
}
@@ -471,7 +574,8 @@ static int do_show(int argc, char **argv)
fd = link_parse_fd(&argc, &argv);
if (fd < 0)
return fd;
- return do_show_link(fd);
+ do_show_link(fd);
+ goto out;
}
if (argc)
@@ -510,6 +614,9 @@ static int do_show(int argc, char **argv)
if (show_pinned)
delete_pinned_obj_table(link_table);
+out:
+ if (dd.sym_count)
+ kernel_syms_destroy(&dd);
return errno == ENOENT ? 0 : -1;
}
--
1.8.3.1
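One subtlety worth flagging in the patch above: `cmp_u64()` returns `*a - *b`, which narrows a `__u64` difference to `int` and can report the wrong sign when the operands differ in the high bits, as kernel addresses such as 0xffffffff... routinely do. The usual fix (a suggested improvement, not part of the posted series) builds the result from two comparisons so nothing is ever truncated:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* qsort comparator for 64-bit values that returns -1/0/1 from two
 * comparisons instead of narrowing a 64-bit difference to int. */
static int cmp_u64_safe(const void *A, const void *B)
{
	const uint64_t *a = A, *b = B;

	return (*a > *b) - (*a < *b);
}
```

For example, with the subtraction form, 0xffffffffbbc44f20 - 1 has low 32 bits 0xbbc44f1f, which is negative as an `int`, so the larger value would sort as smaller.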
* Re: [PATCH v5 bpf-next 03/11] bpftool: Show kprobe_multi link info
2023-06-23 14:15 ` [PATCH v5 bpf-next 03/11] bpftool: Show kprobe_multi link info Yafang Shao
@ 2023-06-23 16:48 ` Quentin Monnet
2023-06-25 14:29 ` Yafang Shao
0 siblings, 1 reply; 25+ messages in thread
From: Quentin Monnet @ 2023-06-23 16:48 UTC (permalink / raw)
To: Yafang Shao, ast, daniel, john.fastabend, andrii, martin.lau,
song, yhs, kpsingh, sdf, haoluo, jolsa, rostedt, mhiramat
Cc: bpf, linux-trace-kernel
2023-06-23 14:15 UTC+0000 ~ Yafang Shao <laoar.shao@gmail.com>
> Show the already exposed kprobe_multi link info in bpftool. The result is
> as follows:
>
> $ tools/bpf/bpftool/bpftool link show
> 4: kprobe_multi prog 22
> kprobe.multi func_cnt 7
> addr func [module]
> ffffffffbbc44f20 schedule_timeout_interruptible
> ffffffffbbc44f60 schedule_timeout_killable
> ffffffffbbc44fa0 schedule_timeout_uninterruptible
> ffffffffbbc44fe0 schedule_timeout_idle
> ffffffffc08028d0 xfs_trans_get_efd [xfs]
> ffffffffc080fa10 xfs_trans_get_buf_map [xfs]
> ffffffffc0813320 xfs_trans_get_dqtrx [xfs]
> pids kprobe_multi(1434978)
> 5: kprobe_multi prog 22
> kretprobe.multi func_cnt 7
> addr func [module]
> ffffffffbbc44f20 schedule_timeout_interruptible
> ffffffffbbc44f60 schedule_timeout_killable
> ffffffffbbc44fa0 schedule_timeout_uninterruptible
> ffffffffbbc44fe0 schedule_timeout_idle
> ffffffffc08028d0 xfs_trans_get_efd [xfs]
> ffffffffc080fa10 xfs_trans_get_buf_map [xfs]
> ffffffffc0813320 xfs_trans_get_dqtrx [xfs]
> pids kprobe_multi(1434978)
>
> $ tools/bpf/bpftool/bpftool link show -j
> [{"id":4,"type":"kprobe_multi","prog_id":22,"retprobe":false,"func_cnt":7,"funcs":[{"addr":18446744072564789024,"func":"schedule_timeout_interruptible","module":""},{"addr":18446744072564789088,"func":"schedule_timeout_killable","module":""},{"addr":18446744072564789152,"func":"schedule_timeout_uninterruptible","module":""},{"addr":18446744072564789216,"func":"schedule_timeout_idle","module":""},{"addr":18446744072644208848,"func":"xfs_trans_get_efd","module":"xfs"},{"addr":18446744072644262416,"func":"xfs_trans_get_buf_map","module":"xfs"},{"addr":18446744072644277024,"func":"xfs_trans_get_dqtrx","module":"xfs"}],"pids":[{"pid":1434978,"comm":"kprobe_multi"}]},{"id":5,"type":"kprobe_multi","prog_id":22,"retprobe":true,"func_cnt":7,"funcs":[{"addr":18446744072564789024,"func":"schedule_timeout_interruptible","module":""},{"addr":18446744072564789088,"func":"schedule_timeout_killable","module":""},{"addr":18446744072564789152,"func":"schedule_timeout_uninterruptible","module":""},{"addr":18446744072564789216,"func":"schedule_timeout_idle","module":""},{"addr":18446744072644208848,"func":"xfs_trans_get_efd","module":"xfs"},{"addr":18446744072644262416,"func":"xfs_trans_get_buf_map","module":"xfs"},{"addr":18446744072644277024,"func":"xfs_trans_get_dqtrx","module":"xfs"}],"pids":[{"pid":1434978,"comm":"kprobe_multi"}]}]
>
> When kptr_restrict is 2, the result is,
>
> $ tools/bpf/bpftool/bpftool link show
> 4: kprobe_multi prog 22
> kprobe.multi func_cnt 7
> 5: kprobe_multi prog 22
> kretprobe.multi func_cnt 7
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
> tools/bpf/bpftool/link.c | 109 ++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 108 insertions(+), 1 deletion(-)
>
> diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
> index 2d78607..8461e6d 100644
> --- a/tools/bpf/bpftool/link.c
> +++ b/tools/bpf/bpftool/link.c
> @@ -14,8 +14,10 @@
>
> #include "json_writer.h"
> #include "main.h"
> +#include "xlated_dumper.h"
>
> static struct hashmap *link_table;
> +static struct dump_data dd = {};
>
> static int link_parse_fd(int *argc, char ***argv)
> {
> @@ -166,6 +168,45 @@ static int get_prog_info(int prog_id, struct bpf_prog_info *info)
> return err;
> }
>
> +static int cmp_u64(const void *A, const void *B)
> +{
> + const __u64 *a = A, *b = B;
> +
> + return *a - *b;
> +}
> +
> +static void
> +show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
> +{
> + __u32 i, j = 0;
> + __u64 *addrs;
> +
> + jsonw_bool_field(json_wtr, "retprobe",
> + info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN);
> + jsonw_uint_field(json_wtr, "func_cnt", info->kprobe_multi.count);
> + jsonw_name(json_wtr, "funcs");
> + jsonw_start_array(json_wtr);
> + addrs = (__u64 *)u64_to_ptr(info->kprobe_multi.addrs);
> + qsort((void *)addrs, info->kprobe_multi.count, sizeof(__u64), cmp_u64);
> +
> + /* Load it once for all. */
> + if (!dd.sym_count)
> + kernel_syms_load(&dd);
> + for (i = 0; i < dd.sym_count; i++) {
> + if (dd.sym_mapping[i].address != addrs[j])
> + continue;
> + jsonw_start_object(json_wtr);
> + jsonw_uint_field(json_wtr, "addr", dd.sym_mapping[i].address);
> + jsonw_string_field(json_wtr, "func", dd.sym_mapping[i].name);
> + /* Print none if it is vmlinux */
> + jsonw_string_field(json_wtr, "module", dd.sym_mapping[i].module);
We could maybe print null ("jsonw_null(json_wtr);") instead of an empty
string here when we have no module name. Although I'm not sure it
matters too much. Let's see whether the series needs another respin.
Anyway:
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
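The two-pass pattern `do_show_link()` uses in this patch (query with a NULL buffer to learn the count, allocate, then re-query) generalizes to any fill_link_info-style interface. A self-contained sketch against a hypothetical getter (`get_info_model()` stands in for `bpf_link_get_info_by_fd()`; all names here are made up for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

struct info_model { unsigned long long *addrs; unsigned int count; };

/* Stand-in for the kernel side: reports the count when no buffer is
 * supplied, fills the buffer otherwise. */
static int get_info_model(struct info_model *info)
{
	static const unsigned long long kernel_addrs[3] = {0x10, 0x20, 0x30};

	if (!info->addrs) {
		info->count = 3;
		return 0;
	}
	if (info->count < 3)
		return -EINVAL;
	memcpy(info->addrs, kernel_addrs, sizeof(kernel_addrs));
	return 0;
}

/* The two-pass idiom from do_show_link(): probe, allocate, re-query.
 * Returns a heap buffer the caller must free, or NULL on failure. */
static unsigned long long *fetch_addrs(unsigned int *count)
{
	struct info_model info = { .addrs = NULL, .count = 0 };
	unsigned long long *buf;

	if (get_info_model(&info) || !info.count)
		return NULL;
	buf = calloc(info.count, sizeof(*buf));
	if (!buf)
		return NULL;
	info.addrs = buf;
	if (get_info_model(&info)) {
		free(buf);
		return NULL;
	}
	*count = info.count;
	return buf;
}
```

bpftool implements the same idea with its `goto again;` loop, re-issuing the query once `info.kprobe_multi.addrs` points at freshly allocated memory.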
* Re: [PATCH v5 bpf-next 03/11] bpftool: Show kprobe_multi link info
2023-06-23 16:48 ` Quentin Monnet
@ 2023-06-25 14:29 ` Yafang Shao
0 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-25 14:29 UTC (permalink / raw)
To: Quentin Monnet
Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, rostedt, mhiramat, bpf,
linux-trace-kernel
On Sat, Jun 24, 2023 at 12:49 AM Quentin Monnet <quentin@isovalent.com> wrote:
>
> 2023-06-23 14:15 UTC+0000 ~ Yafang Shao <laoar.shao@gmail.com>
> > Show the already exposed kprobe_multi link info in bpftool. The result is
> > as follows:
> >
> > $ tools/bpf/bpftool/bpftool link show
> > 4: kprobe_multi prog 22
> > kprobe.multi func_cnt 7
> > addr func [module]
> > ffffffffbbc44f20 schedule_timeout_interruptible
> > ffffffffbbc44f60 schedule_timeout_killable
> > ffffffffbbc44fa0 schedule_timeout_uninterruptible
> > ffffffffbbc44fe0 schedule_timeout_idle
> > ffffffffc08028d0 xfs_trans_get_efd [xfs]
> > ffffffffc080fa10 xfs_trans_get_buf_map [xfs]
> > ffffffffc0813320 xfs_trans_get_dqtrx [xfs]
> > pids kprobe_multi(1434978)
> > 5: kprobe_multi prog 22
> > kretprobe.multi func_cnt 7
> > addr func [module]
> > ffffffffbbc44f20 schedule_timeout_interruptible
> > ffffffffbbc44f60 schedule_timeout_killable
> > ffffffffbbc44fa0 schedule_timeout_uninterruptible
> > ffffffffbbc44fe0 schedule_timeout_idle
> > ffffffffc08028d0 xfs_trans_get_efd [xfs]
> > ffffffffc080fa10 xfs_trans_get_buf_map [xfs]
> > ffffffffc0813320 xfs_trans_get_dqtrx [xfs]
> > pids kprobe_multi(1434978)
> >
> > $ tools/bpf/bpftool/bpftool link show -j
> > [{"id":4,"type":"kprobe_multi","prog_id":22,"retprobe":false,"func_cnt":7,"funcs":[{"addr":18446744072564789024,"func":"schedule_timeout_interruptible","module":""},{"addr":18446744072564789088,"func":"schedule_timeout_killable","module":""},{"addr":18446744072564789152,"func":"schedule_timeout_uninterruptible","module":""},{"addr":18446744072564789216,"func":"schedule_timeout_idle","module":""},{"addr":18446744072644208848,"func":"xfs_trans_get_efd","module":"xfs"},{"addr":18446744072644262416,"func":"xfs_trans_get_buf_map","module":"xfs"},{"addr":18446744072644277024,"func":"xfs_trans_get_dqtrx","module":"xfs"}],"pids":[{"pid":1434978,"comm":"kprobe_multi"}]},{"id":5,"type":"kprobe_multi","prog_id":22,"retprobe":true,"func_cnt":7,"funcs":[{"addr":18446744072564789024,"func":"schedule_timeout_interruptible","module":""},{"addr":18446744072564789088,"func":"schedule_timeout_killable","module":""},{"addr":18446744072564789152,"func":"schedule_timeout_uninterruptible","module":""},{"addr":18446744072564789216,"func":"schedule_timeout_idle","module":""},{"addr":18446744072644208848,"func":"xfs_trans_get_efd","module":"xfs"},{"addr":18446744072644262416,"func":"xfs_trans_get_buf_map","module":"xfs"},{"addr":18446744072644277024,"func":"xfs_trans_get_dqtrx","module":"xfs"}],"pids":[{"pid":1434978,"comm":"kprobe_multi"}]}]
> >
> > When kptr_restrict is 2, the result is,
> >
> > $ tools/bpf/bpftool/bpftool link show
> > 4: kprobe_multi prog 22
> > kprobe.multi func_cnt 7
> > 5: kprobe_multi prog 22
> > kretprobe.multi func_cnt 7
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > ---
> > tools/bpf/bpftool/link.c | 109 ++++++++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 108 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
> > index 2d78607..8461e6d 100644
> > --- a/tools/bpf/bpftool/link.c
> > +++ b/tools/bpf/bpftool/link.c
> > @@ -14,8 +14,10 @@
> >
> > #include "json_writer.h"
> > #include "main.h"
> > +#include "xlated_dumper.h"
> >
> > static struct hashmap *link_table;
> > +static struct dump_data dd = {};
> >
> > static int link_parse_fd(int *argc, char ***argv)
> > {
> > @@ -166,6 +168,45 @@ static int get_prog_info(int prog_id, struct bpf_prog_info *info)
> > return err;
> > }
> >
> > +static int cmp_u64(const void *A, const void *B)
> > +{
> > + const __u64 *a = A, *b = B;
> > +
> > + return *a - *b;
> > +}
> > +
> > +static void
> > +show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
> > +{
> > + __u32 i, j = 0;
> > + __u64 *addrs;
> > +
> > + jsonw_bool_field(json_wtr, "retprobe",
> > + info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN);
> > + jsonw_uint_field(json_wtr, "func_cnt", info->kprobe_multi.count);
> > + jsonw_name(json_wtr, "funcs");
> > + jsonw_start_array(json_wtr);
> > + addrs = (__u64 *)u64_to_ptr(info->kprobe_multi.addrs);
> > + qsort((void *)addrs, info->kprobe_multi.count, sizeof(__u64), cmp_u64);
> > +
> > + /* Load it once for all. */
> > + if (!dd.sym_count)
> > + kernel_syms_load(&dd);
> > + for (i = 0; i < dd.sym_count; i++) {
> > + if (dd.sym_mapping[i].address != addrs[j])
> > + continue;
> > + jsonw_start_object(json_wtr);
> > + jsonw_uint_field(json_wtr, "addr", dd.sym_mapping[i].address);
> > + jsonw_string_field(json_wtr, "func", dd.sym_mapping[i].name);
> > + /* Print none if it is vmlinux */
> > + jsonw_string_field(json_wtr, "module", dd.sym_mapping[i].module);
>
> We could maybe print null ("jsonw_null(json_wtr);") instead of an empty
> string here when we have no module name. Although I'm not sure it
> matters too much. Let's see whether the series needs another respin.
Will change it. Thanks for your review.
>
> Anyway:
>
> Reviewed-by: Quentin Monnet <quentin@isovalent.com>
--
Regards
Yafang
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v5 bpf-next 04/11] bpf: Protect probed address based on kptr_restrict setting
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
` (2 preceding siblings ...)
2023-06-23 14:15 ` [PATCH v5 bpf-next 03/11] bpftool: Show kprobe_multi link info Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 14:15 ` [PATCH v5 bpf-next 05/11] bpf: Clear the probe_addr for uprobe Yafang Shao
` (6 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
The probed address can be accessed by userspace through querying the task
file descriptor (fd). However, the kptr_restrict setting must be
respected: the address must not be exposed when that setting does not
permit it.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
kernel/trace/trace_kprobe.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 59cda19..e4554db 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1551,7 +1551,10 @@ int bpf_get_kprobe_info(const struct perf_event *event, u32 *fd_type,
} else {
*symbol = NULL;
*probe_offset = 0;
- *probe_addr = (unsigned long)tk->rp.kp.addr;
+ if (kallsyms_show_value(current_cred()))
+ *probe_addr = (unsigned long)tk->rp.kp.addr;
+ else
+ *probe_addr = 0;
}
return 0;
}
--
1.8.3.1
* [PATCH v5 bpf-next 05/11] bpf: Clear the probe_addr for uprobe
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
` (3 preceding siblings ...)
2023-06-23 14:15 ` [PATCH v5 bpf-next 04/11] bpf: Protect probed address based on kptr_restrict setting Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 14:15 ` [PATCH v5 bpf-next 06/11] bpf: Expose symbol's respective address Yafang Shao
` (5 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
To avoid returning uninitialized or random values when the file
descriptor (fd) is queried, the probe_addr variable must be cleared
before it is used.
Fixes: 41bdc4b40ed6 ("bpf: introduce bpf subcommand BPF_TASK_FD_QUERY")
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
kernel/trace/bpf_trace.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 2123197b..45ee111 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2372,10 +2372,12 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
event->attr.type == PERF_TYPE_TRACEPOINT);
#endif
#ifdef CONFIG_UPROBE_EVENTS
- if (flags & TRACE_EVENT_FL_UPROBE)
+ if (flags & TRACE_EVENT_FL_UPROBE) {
err = bpf_get_uprobe_info(event, fd_type, buf,
probe_offset,
event->attr.type == PERF_TYPE_TRACEPOINT);
+ *probe_addr = 0x0;
+ }
#endif
}
--
1.8.3.1
* [PATCH v5 bpf-next 06/11] bpf: Expose symbol's respective address
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
` (4 preceding siblings ...)
2023-06-23 14:15 ` [PATCH v5 bpf-next 05/11] bpf: Clear the probe_addr for uprobe Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 14:15 ` [PATCH v5 bpf-next 07/11] bpf: Add a common helper bpf_copy_to_user() Yafang Shao
` (4 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
Since different symbols can share the same name, it is insufficient to only
expose the symbol name. It is essential to also expose the symbol address
so that users can accurately identify which one is being probed.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
kernel/trace/trace_kprobe.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index e4554db..17e1729 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1547,15 +1547,15 @@ int bpf_get_kprobe_info(const struct perf_event *event, u32 *fd_type,
if (tk->symbol) {
*symbol = tk->symbol;
*probe_offset = tk->rp.kp.offset;
- *probe_addr = 0;
} else {
*symbol = NULL;
*probe_offset = 0;
- if (kallsyms_show_value(current_cred()))
- *probe_addr = (unsigned long)tk->rp.kp.addr;
- else
- *probe_addr = 0;
}
+
+ if (kallsyms_show_value(current_cred()))
+ *probe_addr = (unsigned long)tk->rp.kp.addr;
+ else
+ *probe_addr = 0;
return 0;
}
#endif /* CONFIG_PERF_EVENTS */
--
1.8.3.1
* [PATCH v5 bpf-next 07/11] bpf: Add a common helper bpf_copy_to_user()
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
` (5 preceding siblings ...)
2023-06-23 14:15 ` [PATCH v5 bpf-next 06/11] bpf: Expose symbol's respective address Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 14:15 ` [PATCH v5 bpf-next 08/11] bpf: Add bpf_perf_link_fill_common() Yafang Shao
` (3 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
Add a common helper bpf_copy_to_user(), which will be used in multiple
places.
No functional change.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
kernel/bpf/syscall.c | 34 ++++++++++++++++++++--------------
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index a75c54b..f3e2d4e 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3291,6 +3291,25 @@ static void bpf_raw_tp_link_show_fdinfo(const struct bpf_link *link,
raw_tp_link->btp->tp->name);
}
+static int bpf_copy_to_user(char __user *ubuf, const char *buf, u32 ulen,
+ u32 len)
+{
+ if (ulen >= len + 1) {
+ if (copy_to_user(ubuf, buf, len + 1))
+ return -EFAULT;
+ } else {
+ char zero = '\0';
+
+ if (copy_to_user(ubuf, buf, ulen - 1))
+ return -EFAULT;
+ if (put_user(zero, ubuf + ulen - 1))
+ return -EFAULT;
+ return -ENOSPC;
+ }
+
+ return 0;
+}
+
static int bpf_raw_tp_link_fill_link_info(const struct bpf_link *link,
struct bpf_link_info *info)
{
@@ -3309,20 +3328,7 @@ static int bpf_raw_tp_link_fill_link_info(const struct bpf_link *link,
if (!ubuf)
return 0;
- if (ulen >= tp_len + 1) {
- if (copy_to_user(ubuf, tp_name, tp_len + 1))
- return -EFAULT;
- } else {
- char zero = '\0';
-
- if (copy_to_user(ubuf, tp_name, ulen - 1))
- return -EFAULT;
- if (put_user(zero, ubuf + ulen - 1))
- return -EFAULT;
- return -ENOSPC;
- }
-
- return 0;
+ return bpf_copy_to_user(ubuf, tp_name, ulen, tp_len);
}
static const struct bpf_link_ops bpf_raw_tp_link_lops = {
--
1.8.3.1
* [PATCH v5 bpf-next 08/11] bpf: Add bpf_perf_link_fill_common()
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
` (6 preceding siblings ...)
2023-06-23 14:15 ` [PATCH v5 bpf-next 07/11] bpf: Add a common helper bpf_copy_to_user() Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 14:15 ` [PATCH v5 bpf-next 09/11] bpf: Support ->fill_link_info for perf_event Yafang Shao
` (2 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
Add a new helper bpf_perf_link_fill_common(), which will be used by
perf_link based tracepoint, kprobe and uprobe.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
kernel/bpf/syscall.c | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index f3e2d4e..c863d39 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3360,6 +3360,40 @@ static void bpf_perf_link_dealloc(struct bpf_link *link)
kfree(perf_link);
}
+static int bpf_perf_link_fill_common(const struct perf_event *event,
+ char __user *uname, u32 ulen,
+ u64 *probe_offset, u64 *probe_addr,
+ u32 *fd_type)
+{
+ const char *buf;
+ u32 prog_id;
+ size_t len;
+ int err;
+
+ if (!ulen ^ !uname)
+ return -EINVAL;
+ if (!uname)
+ return 0;
+
+ err = bpf_get_perf_event_info(event, &prog_id, fd_type, &buf,
+ probe_offset, probe_addr);
+ if (err)
+ return err;
+
+ len = strlen(buf);
+ if (buf) {
+ err = bpf_copy_to_user(uname, buf, ulen, len);
+ if (err)
+ return err;
+ } else {
+ char zero = '\0';
+
+ if (put_user(zero, uname))
+ return -EFAULT;
+ }
+ return 0;
+}
+
static const struct bpf_link_ops bpf_perf_link_lops = {
.release = bpf_perf_link_release,
.dealloc = bpf_perf_link_dealloc,
--
1.8.3.1
* [PATCH v5 bpf-next 09/11] bpf: Support ->fill_link_info for perf_event
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
` (7 preceding siblings ...)
2023-06-23 14:15 ` [PATCH v5 bpf-next 08/11] bpf: Add bpf_perf_link_fill_common() Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 21:55 ` Andrii Nakryiko
2023-06-23 14:15 ` [PATCH v5 bpf-next 10/11] bpftool: Add perf event names Yafang Shao
2023-06-23 14:15 ` [PATCH v5 bpf-next 11/11] bpftool: Show perf link info Yafang Shao
10 siblings, 1 reply; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
By introducing support for ->fill_link_info to the perf_event link, users
gain the ability to inspect it using `bpftool link show`. While the current
approach involves accessing this information via `bpftool perf show`,
consolidating link information for all link types in one place offers
greater convenience. Additionally, this patch extends support to the
generic perf event, which is not currently accommodated by
`bpftool perf show`. Only the perf type and config are exposed to
userspace; other attributes such as sample_period and sample_freq are
ignored. Note that if the kptr_restrict setting does not permit it, the
probed address is not exposed, preserving security.
A new enum bpf_perf_event_type is introduced to help the user understand
which struct is relevant.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
include/uapi/linux/bpf.h | 35 +++++++++++++
kernel/bpf/syscall.c | 115 +++++++++++++++++++++++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 35 +++++++++++++
3 files changed, 185 insertions(+)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 23691ea..1c579d5 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1056,6 +1056,14 @@ enum bpf_link_type {
MAX_BPF_LINK_TYPE,
};
+enum bpf_perf_event_type {
+ BPF_PERF_EVENT_UNSPEC = 0,
+ BPF_PERF_EVENT_UPROBE = 1,
+ BPF_PERF_EVENT_KPROBE = 2,
+ BPF_PERF_EVENT_TRACEPOINT = 3,
+ BPF_PERF_EVENT_EVENT = 4,
+};
+
/* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
*
* NONE(default): No further bpf programs allowed in the subtree.
@@ -6443,6 +6451,33 @@ struct bpf_link_info {
__u32 count;
__u32 flags;
} kprobe_multi;
+ struct {
+ __u32 type; /* enum bpf_perf_event_type */
+ __u32 :32;
+ union {
+ struct {
+ __aligned_u64 file_name; /* in/out */
+ __u32 name_len;
+ __u32 offset;/* offset from file_name */
+ __u32 flags;
+ } uprobe; /* BPF_PERF_EVENT_UPROBE */
+ struct {
+ __aligned_u64 func_name; /* in/out */
+ __u32 name_len;
+ __u32 offset;/* offset from func_name */
+ __u64 addr;
+ __u32 flags;
+ } kprobe; /* BPF_PERF_EVENT_KPROBE */
+ struct {
+ __aligned_u64 tp_name; /* in/out */
+ __u32 name_len;
+ } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
+ struct {
+ __u64 config;
+ __u32 type;
+ } event; /* BPF_PERF_EVENT_EVENT */
+ };
+ } perf_event;
};
} __attribute__((aligned(8)));
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index c863d39..02dad3c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3394,9 +3394,124 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
return 0;
}
+#ifdef CONFIG_KPROBE_EVENTS
+static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
+ struct bpf_link_info *info)
+{
+ char __user *uname;
+ u64 addr, offset;
+ u32 ulen, type;
+ int err;
+
+ uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
+ ulen = info->perf_event.kprobe.name_len;
+ info->perf_event.type = BPF_PERF_EVENT_KPROBE;
+ err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
+ &type);
+ if (err)
+ return err;
+
+ info->perf_event.kprobe.offset = offset;
+ if (type == BPF_FD_TYPE_KRETPROBE)
+ info->perf_event.kprobe.flags = 1;
+ if (!kallsyms_show_value(current_cred()))
+ return 0;
+ info->perf_event.kprobe.addr = addr;
+ return 0;
+}
+#endif
+
+#ifdef CONFIG_UPROBE_EVENTS
+static int bpf_perf_link_fill_uprobe(const struct perf_event *event,
+ struct bpf_link_info *info)
+{
+ char __user *uname;
+ u64 addr, offset;
+ u32 ulen, type;
+ int err;
+
+ uname = u64_to_user_ptr(info->perf_event.uprobe.file_name);
+ ulen = info->perf_event.uprobe.name_len;
+ info->perf_event.type = BPF_PERF_EVENT_UPROBE;
+ err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
+ &type);
+ if (err)
+ return err;
+
+ info->perf_event.uprobe.offset = offset;
+ if (type == BPF_FD_TYPE_URETPROBE)
+ info->perf_event.uprobe.flags = 1;
+ return 0;
+}
+#endif
+
+static int bpf_perf_link_fill_probe(const struct perf_event *event,
+ struct bpf_link_info *info)
+{
+#ifdef CONFIG_KPROBE_EVENTS
+ if (event->tp_event->flags & TRACE_EVENT_FL_KPROBE)
+ return bpf_perf_link_fill_kprobe(event, info);
+#endif
+#ifdef CONFIG_UPROBE_EVENTS
+ if (event->tp_event->flags & TRACE_EVENT_FL_UPROBE)
+ return bpf_perf_link_fill_uprobe(event, info);
+#endif
+ return -EOPNOTSUPP;
+}
+
+static int bpf_perf_link_fill_tracepoint(const struct perf_event *event,
+ struct bpf_link_info *info)
+{
+ char __user *uname;
+ u64 addr, offset;
+ u32 ulen, type;
+
+ uname = u64_to_user_ptr(info->perf_event.tracepoint.tp_name);
+ ulen = info->perf_event.tracepoint.name_len;
+ info->perf_event.type = BPF_PERF_EVENT_TRACEPOINT;
+ return bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
+ &type);
+}
+
+static int bpf_perf_link_fill_perf_event(const struct perf_event *event,
+ struct bpf_link_info *info)
+{
+ info->perf_event.event.type = event->attr.type;
+ info->perf_event.event.config = event->attr.config;
+ info->perf_event.type = BPF_PERF_EVENT_EVENT;
+ return 0;
+}
+
+static int bpf_perf_link_fill_link_info(const struct bpf_link *link,
+ struct bpf_link_info *info)
+{
+ struct bpf_perf_link *perf_link;
+ const struct perf_event *event;
+
+ perf_link = container_of(link, struct bpf_perf_link, link);
+ event = perf_get_event(perf_link->perf_file);
+ if (IS_ERR(event))
+ return PTR_ERR(event);
+
+ if (!event->prog)
+ return -EINVAL;
+
+ switch (event->prog->type) {
+ case BPF_PROG_TYPE_PERF_EVENT:
+ return bpf_perf_link_fill_perf_event(event, info);
+ case BPF_PROG_TYPE_TRACEPOINT:
+ return bpf_perf_link_fill_tracepoint(event, info);
+ case BPF_PROG_TYPE_KPROBE:
+ return bpf_perf_link_fill_probe(event, info);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
static const struct bpf_link_ops bpf_perf_link_lops = {
.release = bpf_perf_link_release,
.dealloc = bpf_perf_link_dealloc,
+ .fill_link_info = bpf_perf_link_fill_link_info,
};
static int bpf_perf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 23691ea..1c579d5 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1056,6 +1056,14 @@ enum bpf_link_type {
MAX_BPF_LINK_TYPE,
};
+enum bpf_perf_event_type {
+ BPF_PERF_EVENT_UNSPEC = 0,
+ BPF_PERF_EVENT_UPROBE = 1,
+ BPF_PERF_EVENT_KPROBE = 2,
+ BPF_PERF_EVENT_TRACEPOINT = 3,
+ BPF_PERF_EVENT_EVENT = 4,
+};
+
/* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
*
* NONE(default): No further bpf programs allowed in the subtree.
@@ -6443,6 +6451,33 @@ struct bpf_link_info {
__u32 count;
__u32 flags;
} kprobe_multi;
+ struct {
+ __u32 type; /* enum bpf_perf_event_type */
+ __u32 :32;
+ union {
+ struct {
+ __aligned_u64 file_name; /* in/out */
+ __u32 name_len;
+ __u32 offset;/* offset from file_name */
+ __u32 flags;
+ } uprobe; /* BPF_PERF_EVENT_UPROBE */
+ struct {
+ __aligned_u64 func_name; /* in/out */
+ __u32 name_len;
+ __u32 offset;/* offset from func_name */
+ __u64 addr;
+ __u32 flags;
+ } kprobe; /* BPF_PERF_EVENT_KPROBE */
+ struct {
+ __aligned_u64 tp_name; /* in/out */
+ __u32 name_len;
+ } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
+ struct {
+ __u64 config;
+ __u32 type;
+ } event; /* BPF_PERF_EVENT_EVENT */
+ };
+ } perf_event;
};
} __attribute__((aligned(8)));
--
1.8.3.1
* Re: [PATCH v5 bpf-next 09/11] bpf: Support ->fill_link_info for perf_event
2023-06-23 14:15 ` [PATCH v5 bpf-next 09/11] bpf: Support ->fill_link_info for perf_event Yafang Shao
@ 2023-06-23 21:55 ` Andrii Nakryiko
2023-06-25 14:35 ` Yafang Shao
0 siblings, 1 reply; 25+ messages in thread
From: Andrii Nakryiko @ 2023-06-23 21:55 UTC (permalink / raw)
To: Yafang Shao
Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat, bpf,
linux-trace-kernel
On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> By introducing support for ->fill_link_info to the perf_event link, users
> gain the ability to inspect it using `bpftool link show`. While the current
> approach involves accessing this information via `bpftool perf show`,
> consolidating link information for all link types in one place offers
> greater convenience. Additionally, this patch extends support to the
> generic perf event, which is not currently accommodated by
> `bpftool perf show`. While only the perf type and config are exposed to
> userspace, other attributes such as sample_period and sample_freq are
> ignored. It's important to note that if kptr_restrict is not permitted, the
> probed address will not be exposed, maintaining security measures.
>
> A new enum bpf_perf_event_type is introduced to help the user understand
> which struct is relevant.
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
> include/uapi/linux/bpf.h | 35 +++++++++++++
> kernel/bpf/syscall.c | 115 +++++++++++++++++++++++++++++++++++++++++
> tools/include/uapi/linux/bpf.h | 35 +++++++++++++
> 3 files changed, 185 insertions(+)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 23691ea..1c579d5 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1056,6 +1056,14 @@ enum bpf_link_type {
> MAX_BPF_LINK_TYPE,
> };
>
> +enum bpf_perf_event_type {
> + BPF_PERF_EVENT_UNSPEC = 0,
> + BPF_PERF_EVENT_UPROBE = 1,
> + BPF_PERF_EVENT_KPROBE = 2,
> + BPF_PERF_EVENT_TRACEPOINT = 3,
> + BPF_PERF_EVENT_EVENT = 4,
> +};
> +
> /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
> *
> * NONE(default): No further bpf programs allowed in the subtree.
> @@ -6443,6 +6451,33 @@ struct bpf_link_info {
> __u32 count;
> __u32 flags;
> } kprobe_multi;
> + struct {
> + __u32 type; /* enum bpf_perf_event_type */
> + __u32 :32;
> + union {
> + struct {
> + __aligned_u64 file_name; /* in/out */
> + __u32 name_len;
> + __u32 offset;/* offset from file_name */
> + __u32 flags;
> + } uprobe; /* BPF_PERF_EVENT_UPROBE */
> + struct {
> + __aligned_u64 func_name; /* in/out */
> + __u32 name_len;
> + __u32 offset;/* offset from func_name */
> + __u64 addr;
> + __u32 flags;
> + } kprobe; /* BPF_PERF_EVENT_KPROBE */
> + struct {
> + __aligned_u64 tp_name; /* in/out */
> + __u32 name_len;
> + } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
> + struct {
> + __u64 config;
> + __u32 type;
> + } event; /* BPF_PERF_EVENT_EVENT */
> + };
> + } perf_event;
> };
> } __attribute__((aligned(8)));
>
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index c863d39..02dad3c 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -3394,9 +3394,124 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
> return 0;
> }
>
> +#ifdef CONFIG_KPROBE_EVENTS
> +static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
> + struct bpf_link_info *info)
> +{
> + char __user *uname;
> + u64 addr, offset;
> + u32 ulen, type;
> + int err;
> +
> + uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
> + ulen = info->perf_event.kprobe.name_len;
> + info->perf_event.type = BPF_PERF_EVENT_KPROBE;
> + err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
> + &type);
> + if (err)
> + return err;
> +
> + info->perf_event.kprobe.offset = offset;
> + if (type == BPF_FD_TYPE_KRETPROBE)
> + info->perf_event.kprobe.flags = 1;
hm... ok, sorry, I didn't realize that these flags are not part of
UAPI. I don't think just randomly defining 1 to mean retprobe is a
good approach. Let's drop flags if there are actually no flags.
How about in addition to BPF_PERF_EVENT_UPROBE add
BPF_PERF_EVENT_URETPROBE, and for BPF_PERF_EVENT_KPROBE add also
BPF_PERF_EVENT_KRETPROBE. They will share respective perf_event.uprobe
and perf_event.kprobe sections in bpf_link_info.
It seems consistent with what we did for bpf_task_fd_type enum.
> + if (!kallsyms_show_value(current_cred()))
> + return 0;
> + info->perf_event.kprobe.addr = addr;
> + return 0;
> +}
> +#endif
> +
[...]
* Re: [PATCH v5 bpf-next 09/11] bpf: Support ->fill_link_info for perf_event
2023-06-23 21:55 ` Andrii Nakryiko
@ 2023-06-25 14:35 ` Yafang Shao
0 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-25 14:35 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat, bpf,
linux-trace-kernel
On Sat, Jun 24, 2023 at 5:55 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> >
> > By introducing support for ->fill_link_info to the perf_event link, users
> > gain the ability to inspect it using `bpftool link show`. While the current
> > approach involves accessing this information via `bpftool perf show`,
> > consolidating link information for all link types in one place offers
> > greater convenience. Additionally, this patch extends support to the
> > generic perf event, which is not currently accommodated by
> > `bpftool perf show`. While only the perf type and config are exposed to
> > userspace, other attributes such as sample_period and sample_freq are
> > ignored. It's important to note that if kptr_restrict is not permitted, the
> > probed address will not be exposed, maintaining security measures.
> >
> > A new enum bpf_perf_event_type is introduced to help the user understand
> > which struct is relevant.
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > ---
> > include/uapi/linux/bpf.h | 35 +++++++++++++
> > kernel/bpf/syscall.c | 115 +++++++++++++++++++++++++++++++++++++++++
> > tools/include/uapi/linux/bpf.h | 35 +++++++++++++
> > 3 files changed, 185 insertions(+)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 23691ea..1c579d5 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -1056,6 +1056,14 @@ enum bpf_link_type {
> > MAX_BPF_LINK_TYPE,
> > };
> >
> > +enum bpf_perf_event_type {
> > + BPF_PERF_EVENT_UNSPEC = 0,
> > + BPF_PERF_EVENT_UPROBE = 1,
> > + BPF_PERF_EVENT_KPROBE = 2,
> > + BPF_PERF_EVENT_TRACEPOINT = 3,
> > + BPF_PERF_EVENT_EVENT = 4,
> > +};
> > +
> > /* cgroup-bpf attach flags used in BPF_PROG_ATTACH command
> > *
> > * NONE(default): No further bpf programs allowed in the subtree.
> > @@ -6443,6 +6451,33 @@ struct bpf_link_info {
> > __u32 count;
> > __u32 flags;
> > } kprobe_multi;
> > + struct {
> > + __u32 type; /* enum bpf_perf_event_type */
> > + __u32 :32;
> > + union {
> > + struct {
> > + __aligned_u64 file_name; /* in/out */
> > + __u32 name_len;
> > + __u32 offset;/* offset from file_name */
> > + __u32 flags;
> > + } uprobe; /* BPF_PERF_EVENT_UPROBE */
> > + struct {
> > + __aligned_u64 func_name; /* in/out */
> > + __u32 name_len;
> > + __u32 offset;/* offset from func_name */
> > + __u64 addr;
> > + __u32 flags;
> > + } kprobe; /* BPF_PERF_EVENT_KPROBE */
> > + struct {
> > + __aligned_u64 tp_name; /* in/out */
> > + __u32 name_len;
> > + } tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
> > + struct {
> > + __u64 config;
> > + __u32 type;
> > + } event; /* BPF_PERF_EVENT_EVENT */
> > + };
> > + } perf_event;
> > };
> > } __attribute__((aligned(8)));
> >
> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > index c863d39..02dad3c 100644
> > --- a/kernel/bpf/syscall.c
> > +++ b/kernel/bpf/syscall.c
> > @@ -3394,9 +3394,124 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
> > return 0;
> > }
> >
> > +#ifdef CONFIG_KPROBE_EVENTS
> > +static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
> > + struct bpf_link_info *info)
> > +{
> > + char __user *uname;
> > + u64 addr, offset;
> > + u32 ulen, type;
> > + int err;
> > +
> > + uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
> > + ulen = info->perf_event.kprobe.name_len;
> > + info->perf_event.type = BPF_PERF_EVENT_KPROBE;
> > + err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
> > + &type);
> > + if (err)
> > + return err;
> > +
> > + info->perf_event.kprobe.offset = offset;
> > + if (type == BPF_FD_TYPE_KRETPROBE)
> > + info->perf_event.kprobe.flags = 1;
>
> hm... ok, sorry, I didn't realize that these flags are not part of
> UAPI. I don't think just randomly defining 1 to mean retprobe is a
> good approach. Let's drop flags if there are actually no flags.
>
> How about in addition to BPF_PERF_EVENT_UPROBE add
> BPF_PERF_EVENT_URETPROBE, and for BPF_PERF_EVENT_KPROBE add also
> BPF_PERF_EVENT_KRETPROBE. They will share respective perf_event.uprobe
> and perf_event.kprobe sections in bpf_link_info.
>
> It seems consistent with what we did for bpf_task_fd_type enum.
Good idea. Will do it.
--
Regards
Yafang
* [PATCH v5 bpf-next 10/11] bpftool: Add perf event names
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
` (8 preceding siblings ...)
2023-06-23 14:15 ` [PATCH v5 bpf-next 09/11] bpf: Support ->fill_link_info for perf_event Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 16:49 ` Quentin Monnet
2023-06-23 14:15 ` [PATCH v5 bpf-next 11/11] bpftool: Show perf link info Yafang Shao
10 siblings, 1 reply; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao, Jiri Olsa
Add new functions and macros to get perf event names. These names are
copied from tools/perf/util/{parse-events,evsel}.c, so that in the future
we have a good chance of sharing the same code.
Suggested-by: Jiri Olsa <olsajiri@gmail.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
tools/bpf/bpftool/link.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)
diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
index 8461e6d..e5aeee3 100644
--- a/tools/bpf/bpftool/link.c
+++ b/tools/bpf/bpftool/link.c
@@ -5,6 +5,7 @@
#include <linux/err.h>
#include <linux/netfilter.h>
#include <linux/netfilter_arp.h>
+#include <linux/perf_event.h>
#include <net/if.h>
#include <stdio.h>
#include <unistd.h>
@@ -19,6 +20,72 @@
static struct hashmap *link_table;
static struct dump_data dd = {};
+static const char *perf_type_name[PERF_TYPE_MAX] = {
+ [PERF_TYPE_HARDWARE] = "hardware",
+ [PERF_TYPE_SOFTWARE] = "software",
+ [PERF_TYPE_TRACEPOINT] = "tracepoint",
+ [PERF_TYPE_HW_CACHE] = "hw-cache",
+ [PERF_TYPE_RAW] = "raw",
+ [PERF_TYPE_BREAKPOINT] = "breakpoint",
+};
+
+const char *event_symbols_hw[PERF_COUNT_HW_MAX] = {
+ [PERF_COUNT_HW_CPU_CYCLES] = "cpu-cycles",
+ [PERF_COUNT_HW_INSTRUCTIONS] = "instructions",
+ [PERF_COUNT_HW_CACHE_REFERENCES] = "cache-references",
+ [PERF_COUNT_HW_CACHE_MISSES] = "cache-misses",
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = "branch-instructions",
+ [PERF_COUNT_HW_BRANCH_MISSES] = "branch-misses",
+ [PERF_COUNT_HW_BUS_CYCLES] = "bus-cycles",
+ [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = "stalled-cycles-frontend",
+ [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = "stalled-cycles-backend",
+ [PERF_COUNT_HW_REF_CPU_CYCLES] = "ref-cycles",
+};
+
+const char *event_symbols_sw[PERF_COUNT_SW_MAX] = {
+ [PERF_COUNT_SW_CPU_CLOCK] = "cpu-clock",
+ [PERF_COUNT_SW_TASK_CLOCK] = "task-clock",
+ [PERF_COUNT_SW_PAGE_FAULTS] = "page-faults",
+ [PERF_COUNT_SW_CONTEXT_SWITCHES] = "context-switches",
+ [PERF_COUNT_SW_CPU_MIGRATIONS] = "cpu-migrations",
+ [PERF_COUNT_SW_PAGE_FAULTS_MIN] = "minor-faults",
+ [PERF_COUNT_SW_PAGE_FAULTS_MAJ] = "major-faults",
+ [PERF_COUNT_SW_ALIGNMENT_FAULTS] = "alignment-faults",
+ [PERF_COUNT_SW_EMULATION_FAULTS] = "emulation-faults",
+ [PERF_COUNT_SW_DUMMY] = "dummy",
+ [PERF_COUNT_SW_BPF_OUTPUT] = "bpf-output",
+ [PERF_COUNT_SW_CGROUP_SWITCHES] = "cgroup-switches",
+};
+
+const char *evsel__hw_cache[PERF_COUNT_HW_CACHE_MAX] = {
+ [PERF_COUNT_HW_CACHE_L1D] = "L1-dcache",
+ [PERF_COUNT_HW_CACHE_L1I] = "L1-icache",
+ [PERF_COUNT_HW_CACHE_LL] = "LLC",
+ [PERF_COUNT_HW_CACHE_DTLB] = "dTLB",
+ [PERF_COUNT_HW_CACHE_ITLB] = "iTLB",
+ [PERF_COUNT_HW_CACHE_BPU] = "branch",
+ [PERF_COUNT_HW_CACHE_NODE] = "node",
+};
+
+const char *evsel__hw_cache_op[PERF_COUNT_HW_CACHE_OP_MAX] = {
+ [PERF_COUNT_HW_CACHE_OP_READ] = "load",
+ [PERF_COUNT_HW_CACHE_OP_WRITE] = "store",
+ [PERF_COUNT_HW_CACHE_OP_PREFETCH] = "prefetch",
+};
+
+const char *evsel__hw_cache_result[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ [PERF_COUNT_HW_CACHE_RESULT_ACCESS] = "refs",
+ [PERF_COUNT_HW_CACHE_RESULT_MISS] = "misses",
+};
+
+#define perf_event_name(array, id) ({ \
+ const char *event_str = NULL; \
+ \
+ if ((id) >= 0 && (id) < ARRAY_SIZE(array)) \
+ event_str = array[id]; \
+ event_str; \
+})
+
static int link_parse_fd(int *argc, char ***argv)
{
int fd;
--
1.8.3.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH v5 bpf-next 10/11] bpftool: Add perf event names
2023-06-23 14:15 ` [PATCH v5 bpf-next 10/11] bpftool: Add perf event names Yafang Shao
@ 2023-06-23 16:49 ` Quentin Monnet
2023-06-25 14:30 ` Yafang Shao
0 siblings, 1 reply; 25+ messages in thread
From: Quentin Monnet @ 2023-06-23 16:49 UTC (permalink / raw)
To: Yafang Shao, ast, daniel, john.fastabend, andrii, martin.lau,
song, yhs, kpsingh, sdf, haoluo, jolsa, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Jiri Olsa
2023-06-23 14:15 UTC+0000 ~ Yafang Shao <laoar.shao@gmail.com>
> Add new name tables and a macro to get perf event names. These names are
> copied from tools/perf/util/{parse-events,evsel}.c, so that we stand a good
> chance of sharing the same code in the future.
>
> Suggested-by: Jiri Olsa <olsajiri@gmail.com>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
> tools/bpf/bpftool/link.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 67 insertions(+)
>
> diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
> index 8461e6d..e5aeee3 100644
> --- a/tools/bpf/bpftool/link.c
> +++ b/tools/bpf/bpftool/link.c
> @@ -5,6 +5,7 @@
> #include <linux/err.h>
> #include <linux/netfilter.h>
> #include <linux/netfilter_arp.h>
> +#include <linux/perf_event.h>
> #include <net/if.h>
> #include <stdio.h>
> #include <unistd.h>
> @@ -19,6 +20,72 @@
> static struct hashmap *link_table;
> static struct dump_data dd = {};
>
> +static const char *perf_type_name[PERF_TYPE_MAX] = {
> + [PERF_TYPE_HARDWARE] = "hardware",
> + [PERF_TYPE_SOFTWARE] = "software",
> + [PERF_TYPE_TRACEPOINT] = "tracepoint",
> + [PERF_TYPE_HW_CACHE] = "hw-cache",
> + [PERF_TYPE_RAW] = "raw",
> + [PERF_TYPE_BREAKPOINT] = "breakpoint",
> +};
These ones (above) are not defined in perf, are they?
> +
> +const char *event_symbols_hw[PERF_COUNT_HW_MAX] = {
> + [PERF_COUNT_HW_CPU_CYCLES] = "cpu-cycles",
> + [PERF_COUNT_HW_INSTRUCTIONS] = "instructions",
> + [PERF_COUNT_HW_CACHE_REFERENCES] = "cache-references",
> + [PERF_COUNT_HW_CACHE_MISSES] = "cache-misses",
> + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = "branch-instructions",
> + [PERF_COUNT_HW_BRANCH_MISSES] = "branch-misses",
> + [PERF_COUNT_HW_BUS_CYCLES] = "bus-cycles",
> + [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = "stalled-cycles-frontend",
> + [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = "stalled-cycles-backend",
> + [PERF_COUNT_HW_REF_CPU_CYCLES] = "ref-cycles",
> +};
> +
> +const char *event_symbols_sw[PERF_COUNT_SW_MAX] = {
> + [PERF_COUNT_SW_CPU_CLOCK] = "cpu-clock",
> + [PERF_COUNT_SW_TASK_CLOCK] = "task-clock",
> + [PERF_COUNT_SW_PAGE_FAULTS] = "page-faults",
> + [PERF_COUNT_SW_CONTEXT_SWITCHES] = "context-switches",
> + [PERF_COUNT_SW_CPU_MIGRATIONS] = "cpu-migrations",
> + [PERF_COUNT_SW_PAGE_FAULTS_MIN] = "minor-faults",
> + [PERF_COUNT_SW_PAGE_FAULTS_MAJ] = "major-faults",
> + [PERF_COUNT_SW_ALIGNMENT_FAULTS] = "alignment-faults",
> + [PERF_COUNT_SW_EMULATION_FAULTS] = "emulation-faults",
> + [PERF_COUNT_SW_DUMMY] = "dummy",
> + [PERF_COUNT_SW_BPF_OUTPUT] = "bpf-output",
> + [PERF_COUNT_SW_CGROUP_SWITCHES] = "cgroup-switches",
> +};
> +
> +const char *evsel__hw_cache[PERF_COUNT_HW_CACHE_MAX] = {
> + [PERF_COUNT_HW_CACHE_L1D] = "L1-dcache",
> + [PERF_COUNT_HW_CACHE_L1I] = "L1-icache",
> + [PERF_COUNT_HW_CACHE_LL] = "LLC",
> + [PERF_COUNT_HW_CACHE_DTLB] = "dTLB",
> + [PERF_COUNT_HW_CACHE_ITLB] = "iTLB",
> + [PERF_COUNT_HW_CACHE_BPU] = "branch",
> + [PERF_COUNT_HW_CACHE_NODE] = "node",
> +};
> +
> +const char *evsel__hw_cache_op[PERF_COUNT_HW_CACHE_OP_MAX] = {
> + [PERF_COUNT_HW_CACHE_OP_READ] = "load",
> + [PERF_COUNT_HW_CACHE_OP_WRITE] = "store",
> + [PERF_COUNT_HW_CACHE_OP_PREFETCH] = "prefetch",
> +};
> +
> +const char *evsel__hw_cache_result[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
> + [PERF_COUNT_HW_CACHE_RESULT_ACCESS] = "refs",
> + [PERF_COUNT_HW_CACHE_RESULT_MISS] = "misses",
> +};
> +
> +#define perf_event_name(array, id) ({ \
> + const char *event_str = NULL; \
> + \
> + if ((id) >= 0 && (id) < ARRAY_SIZE(array)) \
> + event_str = array[id]; \
> + event_str; \
> +})
> +
> static int link_parse_fd(int *argc, char ***argv)
> {
> int fd;
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v5 bpf-next 10/11] bpftool: Add perf event names
2023-06-23 16:49 ` Quentin Monnet
@ 2023-06-25 14:30 ` Yafang Shao
0 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-25 14:30 UTC (permalink / raw)
To: Quentin Monnet
Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, rostedt, mhiramat, bpf,
linux-trace-kernel, Jiri Olsa
On Sat, Jun 24, 2023 at 12:49 AM Quentin Monnet <quentin@isovalent.com> wrote:
>
> 2023-06-23 14:15 UTC+0000 ~ Yafang Shao <laoar.shao@gmail.com>
> > Add new name tables and a macro to get perf event names. These names are
> > copied from tools/perf/util/{parse-events,evsel}.c, so that we stand a good
> > chance of sharing the same code in the future.
> >
> > Suggested-by: Jiri Olsa <olsajiri@gmail.com>
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > ---
> > tools/bpf/bpftool/link.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 67 insertions(+)
> >
> > diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
> > index 8461e6d..e5aeee3 100644
> > --- a/tools/bpf/bpftool/link.c
> > +++ b/tools/bpf/bpftool/link.c
> > @@ -5,6 +5,7 @@
> > #include <linux/err.h>
> > #include <linux/netfilter.h>
> > #include <linux/netfilter_arp.h>
> > +#include <linux/perf_event.h>
> > #include <net/if.h>
> > #include <stdio.h>
> > #include <unistd.h>
> > @@ -19,6 +20,72 @@
> > static struct hashmap *link_table;
> > static struct dump_data dd = {};
> >
> > +static const char *perf_type_name[PERF_TYPE_MAX] = {
> > + [PERF_TYPE_HARDWARE] = "hardware",
> > + [PERF_TYPE_SOFTWARE] = "software",
> > + [PERF_TYPE_TRACEPOINT] = "tracepoint",
> > + [PERF_TYPE_HW_CACHE] = "hw-cache",
> > + [PERF_TYPE_RAW] = "raw",
> > + [PERF_TYPE_BREAKPOINT] = "breakpoint",
> > +};
>
> These ones (above) are not defined in perf, are they?
Right. Will add an explanation in the commit log in the next version.
>
> > +
> > +const char *event_symbols_hw[PERF_COUNT_HW_MAX] = {
> > + [PERF_COUNT_HW_CPU_CYCLES] = "cpu-cycles",
> > + [PERF_COUNT_HW_INSTRUCTIONS] = "instructions",
> > + [PERF_COUNT_HW_CACHE_REFERENCES] = "cache-references",
> > + [PERF_COUNT_HW_CACHE_MISSES] = "cache-misses",
> > + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = "branch-instructions",
> > + [PERF_COUNT_HW_BRANCH_MISSES] = "branch-misses",
> > + [PERF_COUNT_HW_BUS_CYCLES] = "bus-cycles",
> > + [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = "stalled-cycles-frontend",
> > + [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = "stalled-cycles-backend",
> > + [PERF_COUNT_HW_REF_CPU_CYCLES] = "ref-cycles",
> > +};
> > +
> > +const char *event_symbols_sw[PERF_COUNT_SW_MAX] = {
> > + [PERF_COUNT_SW_CPU_CLOCK] = "cpu-clock",
> > + [PERF_COUNT_SW_TASK_CLOCK] = "task-clock",
> > + [PERF_COUNT_SW_PAGE_FAULTS] = "page-faults",
> > + [PERF_COUNT_SW_CONTEXT_SWITCHES] = "context-switches",
> > + [PERF_COUNT_SW_CPU_MIGRATIONS] = "cpu-migrations",
> > + [PERF_COUNT_SW_PAGE_FAULTS_MIN] = "minor-faults",
> > + [PERF_COUNT_SW_PAGE_FAULTS_MAJ] = "major-faults",
> > + [PERF_COUNT_SW_ALIGNMENT_FAULTS] = "alignment-faults",
> > + [PERF_COUNT_SW_EMULATION_FAULTS] = "emulation-faults",
> > + [PERF_COUNT_SW_DUMMY] = "dummy",
> > + [PERF_COUNT_SW_BPF_OUTPUT] = "bpf-output",
> > + [PERF_COUNT_SW_CGROUP_SWITCHES] = "cgroup-switches",
> > +};
> > +
> > +const char *evsel__hw_cache[PERF_COUNT_HW_CACHE_MAX] = {
> > + [PERF_COUNT_HW_CACHE_L1D] = "L1-dcache",
> > + [PERF_COUNT_HW_CACHE_L1I] = "L1-icache",
> > + [PERF_COUNT_HW_CACHE_LL] = "LLC",
> > + [PERF_COUNT_HW_CACHE_DTLB] = "dTLB",
> > + [PERF_COUNT_HW_CACHE_ITLB] = "iTLB",
> > + [PERF_COUNT_HW_CACHE_BPU] = "branch",
> > + [PERF_COUNT_HW_CACHE_NODE] = "node",
> > +};
> > +
> > +const char *evsel__hw_cache_op[PERF_COUNT_HW_CACHE_OP_MAX] = {
> > + [PERF_COUNT_HW_CACHE_OP_READ] = "load",
> > + [PERF_COUNT_HW_CACHE_OP_WRITE] = "store",
> > + [PERF_COUNT_HW_CACHE_OP_PREFETCH] = "prefetch",
> > +};
> > +
> > +const char *evsel__hw_cache_result[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
> > + [PERF_COUNT_HW_CACHE_RESULT_ACCESS] = "refs",
> > + [PERF_COUNT_HW_CACHE_RESULT_MISS] = "misses",
> > +};
> > +
> > +#define perf_event_name(array, id) ({ \
> > + const char *event_str = NULL; \
> > + \
> > + if ((id) >= 0 && (id) < ARRAY_SIZE(array)) \
> > + event_str = array[id]; \
> > + event_str; \
> > +})
> > +
> > static int link_parse_fd(int *argc, char ***argv)
> > {
> > int fd;
>
> Reviewed-by: Quentin Monnet <quentin@isovalent.com>
--
Regards
Yafang
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v5 bpf-next 11/11] bpftool: Show perf link info
2023-06-23 14:15 [PATCH v5 bpf-next 00/11] bpf: Support ->fill_link_info for kprobe_multi and perf_event links Yafang Shao
` (9 preceding siblings ...)
2023-06-23 14:15 ` [PATCH v5 bpf-next 10/11] bpftool: Add perf event names Yafang Shao
@ 2023-06-23 14:15 ` Yafang Shao
2023-06-23 16:49 ` Quentin Monnet
2023-06-23 17:13 ` Alexei Starovoitov
10 siblings, 2 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-23 14:15 UTC (permalink / raw)
To: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, quentin, rostedt, mhiramat
Cc: bpf, linux-trace-kernel, Yafang Shao
Enhance bpftool to display comprehensive information about exposed
perf_event links, covering uprobe, kprobe, tracepoint, and generic perf
events. The resulting output includes the following details:
$ tools/bpf/bpftool/bpftool link show
4: perf_event prog 23
uprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
bpf_cookie 0
pids uprobe(27503)
5: perf_event prog 24
uretprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
bpf_cookie 0
pids uprobe(27503)
6: perf_event prog 31
kprobe ffffffffa90a9660 kernel_clone
bpf_cookie 0
pids kprobe(27777)
7: perf_event prog 30
kretprobe ffffffffa90a9660 kernel_clone
bpf_cookie 0
pids kprobe(27777)
8: perf_event prog 37
tracepoint sched_switch
bpf_cookie 0
pids tracepoint(28036)
9: perf_event prog 43
event software:cpu-clock
bpf_cookie 0
pids perf_event(28261)
10: perf_event prog 43
event hw-cache:LLC-load-misses
bpf_cookie 0
pids perf_event(28261)
11: perf_event prog 43
event hardware:cpu-cycles
bpf_cookie 0
pids perf_event(28261)
$ tools/bpf/bpftool/bpftool link show -j
[{"id":4,"type":"perf_event","prog_id":23,"retprobe":false,"file":"/home/dev/waken/bpf/uprobe/a.out","offset":4920,"bpf_cookie":0,"pids":[{"pid":27503,"comm":"uprobe"}]},{"id":5,"type":"perf_event","prog_id":24,"retprobe":true,"file":"/home/dev/waken/bpf/uprobe/a.out","offset":4920,"bpf_cookie":0,"pids":[{"pid":27503,"comm":"uprobe"}]},{"id":6,"type":"perf_event","prog_id":31,"retprobe":false,"addr":18446744072250627680,"func":"kernel_clone","offset":0,"bpf_cookie":0,"pids":[{"pid":27777,"comm":"kprobe"}]},{"id":7,"type":"perf_event","prog_id":30,"retprobe":true,"addr":18446744072250627680,"func":"kernel_clone","offset":0,"bpf_cookie":0,"pids":[{"pid":27777,"comm":"kprobe"}]},{"id":8,"type":"perf_event","prog_id":37,"tracepoint":"sched_switch","bpf_cookie":0,"pids":[{"pid":28036,"comm":"tracepoint"}]},{"id":9,"type":"perf_event","prog_id":43,"event_type":"software","event_config":"cpu-clock","bpf_cookie":0,"pids":[{"pid":28261,"comm":"perf_event"}]},{"id":10,"type":"perf_event","prog_id":43,"event_type":"hw-cache","event_config":"LLC-load-misses","bpf_cookie":0,"pids":[{"pid":28261,"comm":"perf_event"}]},{"id":11,"type":"perf_event","prog_id":43,"event_type":"hardware","event_config":"cpu-cycles","bpf_cookie":0,"pids":[{"pid":28261,"comm":"perf_event"}]}]
For generic perf events, the displayed information in bpftool is limited to
the type and configuration, while other attributes such as sample_period,
sample_freq, etc., are not included.
The kernel function address won't be exposed if it is not permitted by
kptr_restrict. The result is as follows when kptr_restrict is set to 2.
$ tools/bpf/bpftool/bpftool link show
4: perf_event prog 23
uprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
5: perf_event prog 24
uretprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
6: perf_event prog 31
kprobe kernel_clone
7: perf_event prog 30
kretprobe kernel_clone
8: perf_event prog 37
tracepoint sched_switch
9: perf_event prog 43
event software:cpu-clock
10: perf_event prog 43
event hw-cache:LLC-load-misses
11: perf_event prog 43
event hardware:cpu-cycles
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
tools/bpf/bpftool/link.c | 237 ++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 236 insertions(+), 1 deletion(-)
diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
index e5aeee3..31bee95 100644
--- a/tools/bpf/bpftool/link.c
+++ b/tools/bpf/bpftool/link.c
@@ -17,6 +17,8 @@
#include "main.h"
#include "xlated_dumper.h"
+#define PERF_HW_CACHE_LEN 128
+
static struct hashmap *link_table;
static struct dump_data dd = {};
@@ -274,6 +276,110 @@ static int cmp_u64(const void *A, const void *B)
jsonw_end_array(json_wtr);
}
+static void
+show_perf_event_kprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
+{
+ jsonw_bool_field(wtr, "retprobe", info->perf_event.kprobe.flags & 0x1);
+ jsonw_uint_field(wtr, "addr", info->perf_event.kprobe.addr);
+ jsonw_string_field(wtr, "func",
+ u64_to_ptr(info->perf_event.kprobe.func_name));
+ jsonw_uint_field(wtr, "offset", info->perf_event.kprobe.offset);
+}
+
+static void
+show_perf_event_uprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
+{
+ jsonw_bool_field(wtr, "retprobe", info->perf_event.uprobe.flags & 0x1);
+ jsonw_string_field(wtr, "file",
+ u64_to_ptr(info->perf_event.uprobe.file_name));
+ jsonw_uint_field(wtr, "offset", info->perf_event.uprobe.offset);
+}
+
+static void
+show_perf_event_tracepoint_json(struct bpf_link_info *info, json_writer_t *wtr)
+{
+ jsonw_string_field(wtr, "tracepoint",
+ u64_to_ptr(info->perf_event.tracepoint.tp_name));
+}
+
+static char *perf_config_hw_cache_str(__u64 config)
+{
+ const char *hw_cache, *result, *op;
+ char *str = malloc(PERF_HW_CACHE_LEN);
+
+ if (!str) {
+ p_err("mem alloc failed");
+ return NULL;
+ }
+
+ hw_cache = perf_event_name(evsel__hw_cache, config & 0xff);
+ if (hw_cache)
+ snprintf(str, PERF_HW_CACHE_LEN, "%s-", hw_cache);
+ else
+ snprintf(str, PERF_HW_CACHE_LEN, "%lld-", config & 0xff);
+
+ op = perf_event_name(evsel__hw_cache_op, (config >> 8) & 0xff);
+ if (op)
+ snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
+ "%s-", op);
+ else
+ snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
+ "%lld-", (config >> 8) & 0xff);
+
+ result = perf_event_name(evsel__hw_cache_result, config >> 16);
+ if (result)
+ snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
+ "%s", result);
+ else
+ snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
+ "%lld", config >> 16);
+ return str;
+}
+
+static const char *perf_config_str(__u32 type, __u64 config)
+{
+ const char *perf_config;
+
+ switch (type) {
+ case PERF_TYPE_HARDWARE:
+ perf_config = perf_event_name(event_symbols_hw, config);
+ break;
+ case PERF_TYPE_SOFTWARE:
+ perf_config = perf_event_name(event_symbols_sw, config);
+ break;
+ case PERF_TYPE_HW_CACHE:
+ perf_config = perf_config_hw_cache_str(config);
+ break;
+ default:
+ perf_config = NULL;
+ break;
+ }
+ return perf_config;
+}
+
+static void
+show_perf_event_event_json(struct bpf_link_info *info, json_writer_t *wtr)
+{
+ __u64 config = info->perf_event.event.config;
+ __u32 type = info->perf_event.event.type;
+ const char *perf_type, *perf_config;
+
+ perf_type = perf_event_name(perf_type_name, type);
+ if (perf_type)
+ jsonw_string_field(wtr, "event_type", perf_type);
+ else
+ jsonw_uint_field(wtr, "event_type", type);
+
+ perf_config = perf_config_str(type, config);
+ if (perf_config)
+ jsonw_string_field(wtr, "event_config", perf_config);
+ else
+ jsonw_uint_field(wtr, "event_config", config);
+
+ if (type == PERF_TYPE_HW_CACHE && perf_config)
+ free((void *)perf_config);
+}
+
static int show_link_close_json(int fd, struct bpf_link_info *info)
{
struct bpf_prog_info prog_info;
@@ -329,6 +435,24 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
case BPF_LINK_TYPE_KPROBE_MULTI:
show_kprobe_multi_json(info, json_wtr);
break;
+ case BPF_LINK_TYPE_PERF_EVENT:
+ switch (info->perf_event.type) {
+ case BPF_PERF_EVENT_EVENT:
+ show_perf_event_event_json(info, json_wtr);
+ break;
+ case BPF_PERF_EVENT_TRACEPOINT:
+ show_perf_event_tracepoint_json(info, json_wtr);
+ break;
+ case BPF_PERF_EVENT_KPROBE:
+ show_perf_event_kprobe_json(info, json_wtr);
+ break;
+ case BPF_PERF_EVENT_UPROBE:
+ show_perf_event_uprobe_json(info, json_wtr);
+ break;
+ default:
+ break;
+ }
+ break;
default:
break;
}
@@ -500,6 +624,75 @@ static void show_kprobe_multi_plain(struct bpf_link_info *info)
}
}
+static void show_perf_event_kprobe_plain(struct bpf_link_info *info)
+{
+ const char *buf;
+
+ buf = (const char *)u64_to_ptr(info->perf_event.kprobe.func_name);
+ if (buf[0] == '\0' && !info->perf_event.kprobe.addr)
+ return;
+
+ if (info->perf_event.kprobe.flags & 0x1)
+ printf("\n\tkretprobe ");
+ else
+ printf("\n\tkprobe ");
+ if (info->perf_event.kprobe.addr)
+ printf("%llx ", info->perf_event.kprobe.addr);
+ printf("%s", buf);
+ if (info->perf_event.kprobe.offset)
+ printf("+%#x", info->perf_event.kprobe.offset);
+ printf(" ");
+}
+
+static void show_perf_event_uprobe_plain(struct bpf_link_info *info)
+{
+ const char *buf;
+
+ buf = (const char *)u64_to_ptr(info->perf_event.uprobe.file_name);
+ if (buf[0] == '\0')
+ return;
+
+ if (info->perf_event.uprobe.flags & 0x1)
+ printf("\n\turetprobe ");
+ else
+ printf("\n\tuprobe ");
+ printf("%s+%#x ", buf, info->perf_event.uprobe.offset);
+}
+
+static void show_perf_event_tracepoint_plain(struct bpf_link_info *info)
+{
+ const char *buf;
+
+ buf = (const char *)u64_to_ptr(info->perf_event.tracepoint.tp_name);
+ if (buf[0] == '\0')
+ return;
+
+ printf("\n\ttracepoint %s ", buf);
+}
+
+static void show_perf_event_event_plain(struct bpf_link_info *info)
+{
+ __u64 config = info->perf_event.event.config;
+ __u32 type = info->perf_event.event.type;
+ const char *perf_type, *perf_config;
+
+ printf("\n\tevent ");
+ perf_type = perf_event_name(perf_type_name, type);
+ if (perf_type)
+ printf("%s:", perf_type);
+ else
+ printf("%u:", type);
+
+ perf_config = perf_config_str(type, config);
+ if (perf_config)
+ printf("%s ", perf_config);
+ else
+ printf("%llu ", config);
+
+ if (type == PERF_TYPE_HW_CACHE && perf_config)
+ free((void *)perf_config);
+}
+
static int show_link_close_plain(int fd, struct bpf_link_info *info)
{
struct bpf_prog_info prog_info;
@@ -548,6 +741,24 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
case BPF_LINK_TYPE_KPROBE_MULTI:
show_kprobe_multi_plain(info);
break;
+ case BPF_LINK_TYPE_PERF_EVENT:
+ switch (info->perf_event.type) {
+ case BPF_PERF_EVENT_EVENT:
+ show_perf_event_event_plain(info);
+ break;
+ case BPF_PERF_EVENT_TRACEPOINT:
+ show_perf_event_tracepoint_plain(info);
+ break;
+ case BPF_PERF_EVENT_KPROBE:
+ show_perf_event_kprobe_plain(info);
+ break;
+ case BPF_PERF_EVENT_UPROBE:
+ show_perf_event_uprobe_plain(info);
+ break;
+ default:
+ break;
+ }
+ break;
default:
break;
}
@@ -570,11 +781,12 @@ static int do_show_link(int fd)
struct bpf_link_info info;
__u32 len = sizeof(info);
__u64 *addrs = NULL;
- char buf[256];
+ char buf[PATH_MAX];
int count;
int err;
memset(&info, 0, sizeof(info));
+ buf[0] = '\0';
again:
err = bpf_link_get_info_by_fd(fd, &info, &len);
if (err) {
@@ -609,7 +821,30 @@ static int do_show_link(int fd)
goto again;
}
}
+ if (info.type == BPF_LINK_TYPE_PERF_EVENT) {
+ if (info.perf_event.type == BPF_PERF_EVENT_EVENT)
+ goto out;
+ if (info.perf_event.type == BPF_PERF_EVENT_TRACEPOINT &&
+ !info.perf_event.tracepoint.tp_name) {
+ info.perf_event.tracepoint.tp_name = (unsigned long)&buf;
+ info.perf_event.tracepoint.name_len = sizeof(buf);
+ goto again;
+ }
+ if (info.perf_event.type == BPF_PERF_EVENT_KPROBE &&
+ !info.perf_event.kprobe.func_name) {
+ info.perf_event.kprobe.func_name = (unsigned long)&buf;
+ info.perf_event.kprobe.name_len = sizeof(buf);
+ goto again;
+ }
+ if (info.perf_event.type == BPF_PERF_EVENT_UPROBE &&
+ !info.perf_event.uprobe.file_name) {
+ info.perf_event.uprobe.file_name = (unsigned long)&buf;
+ info.perf_event.uprobe.name_len = sizeof(buf);
+ goto again;
+ }
+ }
+out:
if (json_output)
show_link_close_json(fd, &info);
else
--
1.8.3.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH v5 bpf-next 11/11] bpftool: Show perf link info
2023-06-23 14:15 ` [PATCH v5 bpf-next 11/11] bpftool: Show perf link info Yafang Shao
@ 2023-06-23 16:49 ` Quentin Monnet
2023-06-25 14:31 ` Yafang Shao
2023-06-23 17:13 ` Alexei Starovoitov
1 sibling, 1 reply; 25+ messages in thread
From: Quentin Monnet @ 2023-06-23 16:49 UTC (permalink / raw)
To: Yafang Shao, ast, daniel, john.fastabend, andrii, martin.lau,
song, yhs, kpsingh, sdf, haoluo, jolsa, rostedt, mhiramat
Cc: bpf, linux-trace-kernel
2023-06-23 14:15 UTC+0000 ~ Yafang Shao <laoar.shao@gmail.com>
> Enhance bpftool to display comprehensive information about exposed
> perf_event links, covering uprobe, kprobe, tracepoint, and generic perf
> events. The resulting output includes the following details:
>
> $ tools/bpf/bpftool/bpftool link show
> 4: perf_event prog 23
> uprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
> bpf_cookie 0
> pids uprobe(27503)
> 5: perf_event prog 24
> uretprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
> bpf_cookie 0
> pids uprobe(27503)
> 6: perf_event prog 31
> kprobe ffffffffa90a9660 kernel_clone
> bpf_cookie 0
> pids kprobe(27777)
> 7: perf_event prog 30
> kretprobe ffffffffa90a9660 kernel_clone
> bpf_cookie 0
> pids kprobe(27777)
> 8: perf_event prog 37
> tracepoint sched_switch
> bpf_cookie 0
> pids tracepoint(28036)
> 9: perf_event prog 43
> event software:cpu-clock
> bpf_cookie 0
> pids perf_event(28261)
> 10: perf_event prog 43
> event hw-cache:LLC-load-misses
> bpf_cookie 0
> pids perf_event(28261)
> 11: perf_event prog 43
> event hardware:cpu-cycles
> bpf_cookie 0
> pids perf_event(28261)
>
> $ tools/bpf/bpftool/bpftool link show -j
> [{"id":4,"type":"perf_event","prog_id":23,"retprobe":false,"file":"/home/dev/waken/bpf/uprobe/a.out","offset":4920,"bpf_cookie":0,"pids":[{"pid":27503,"comm":"uprobe"}]},{"id":5,"type":"perf_event","prog_id":24,"retprobe":true,"file":"/home/dev/waken/bpf/uprobe/a.out","offset":4920,"bpf_cookie":0,"pids":[{"pid":27503,"comm":"uprobe"}]},{"id":6,"type":"perf_event","prog_id":31,"retprobe":false,"addr":18446744072250627680,"func":"kernel_clone","offset":0,"bpf_cookie":0,"pids":[{"pid":27777,"comm":"kprobe"}]},{"id":7,"type":"perf_event","prog_id":30,"retprobe":true,"addr":18446744072250627680,"func":"kernel_clone","offset":0,"bpf_cookie":0,"pids":[{"pid":27777,"comm":"kprobe"}]},{"id":8,"type":"perf_event","prog_id":37,"tracepoint":"sched_switch","bpf_cookie":0,"pids":[{"pid":28036,"comm":"tracepoint"}]},{"id":9,"type":"perf_event","prog_id":43,"event_type":"software","event_config":"cpu-clock","bpf_cookie":0,"pids":[{"pid":28261,"comm":"perf_event"}]},{"id":10,"type":"perf_event","prog_id":43,"event_type":"hw-cache","event_config":"LLC-load-misses","bpf_cookie":0,"pids":[{"pid":28261,"comm":"perf_event"}]},{"id":11,"type":"perf_event","prog_id":43,"event_type":"hardware","event_config":"cpu-cycles","bpf_cookie":0,"pids":[{"pid":28261,"comm":"perf_event"}]}]
>
> For generic perf events, the displayed information in bpftool is limited to
> the type and configuration, while other attributes such as sample_period,
> sample_freq, etc., are not included.
>
> The kernel function address won't be exposed if it is not permitted by
> kptr_restrict. The result is as follows when kptr_restrict is set to 2.
>
> $ tools/bpf/bpftool/bpftool link show
> 4: perf_event prog 23
> uprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
> 5: perf_event prog 24
> uretprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
> 6: perf_event prog 31
> kprobe kernel_clone
> 7: perf_event prog 30
> kretprobe kernel_clone
> 8: perf_event prog 37
> tracepoint sched_switch
> 9: perf_event prog 43
> event software:cpu-clock
> 10: perf_event prog 43
> event hw-cache:LLC-load-misses
> 11: perf_event prog 43
> event hardware:cpu-cycles
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
> tools/bpf/bpftool/link.c | 237 ++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 236 insertions(+), 1 deletion(-)
>
> diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
> index e5aeee3..31bee95 100644
> --- a/tools/bpf/bpftool/link.c
> +++ b/tools/bpf/bpftool/link.c
> @@ -17,6 +17,8 @@
> #include "main.h"
> #include "xlated_dumper.h"
>
> +#define PERF_HW_CACHE_LEN 128
> +
> static struct hashmap *link_table;
> static struct dump_data dd = {};
>
> @@ -274,6 +276,110 @@ static int cmp_u64(const void *A, const void *B)
> jsonw_end_array(json_wtr);
> }
>
> +static void
> +show_perf_event_kprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
> +{
> + jsonw_bool_field(wtr, "retprobe", info->perf_event.kprobe.flags & 0x1);
> + jsonw_uint_field(wtr, "addr", info->perf_event.kprobe.addr);
> + jsonw_string_field(wtr, "func",
> + u64_to_ptr(info->perf_event.kprobe.func_name));
> + jsonw_uint_field(wtr, "offset", info->perf_event.kprobe.offset);
> +}
> +
> +static void
> +show_perf_event_uprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
> +{
> + jsonw_bool_field(wtr, "retprobe", info->perf_event.uprobe.flags & 0x1);
> + jsonw_string_field(wtr, "file",
> + u64_to_ptr(info->perf_event.uprobe.file_name));
> + jsonw_uint_field(wtr, "offset", info->perf_event.uprobe.offset);
> +}
> +
> +static void
> +show_perf_event_tracepoint_json(struct bpf_link_info *info, json_writer_t *wtr)
> +{
> + jsonw_string_field(wtr, "tracepoint",
> + u64_to_ptr(info->perf_event.tracepoint.tp_name));
> +}
> +
> +static char *perf_config_hw_cache_str(__u64 config)
> +{
> + const char *hw_cache, *result, *op;
> + char *str = malloc(PERF_HW_CACHE_LEN);
> +
> + if (!str) {
> + p_err("mem alloc failed");
> + return NULL;
> + }
> +
> + hw_cache = perf_event_name(evsel__hw_cache, config & 0xff);
> + if (hw_cache)
> + snprintf(str, PERF_HW_CACHE_LEN, "%s-", hw_cache);
> + else
> + snprintf(str, PERF_HW_CACHE_LEN, "%lld-", config & 0xff);
> +
> + op = perf_event_name(evsel__hw_cache_op, (config >> 8) & 0xff);
> + if (op)
> + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
> + "%s-", op);
> + else
> + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
> + "%lld-", (config >> 8) & 0xff);
> +
> + result = perf_event_name(evsel__hw_cache_result, config >> 16);
> + if (result)
> + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
> + "%s", result);
> + else
> + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
> + "%lld", config >> 16);
> + return str;
> +}
> +
> +static const char *perf_config_str(__u32 type, __u64 config)
> +{
> + const char *perf_config;
> +
> + switch (type) {
> + case PERF_TYPE_HARDWARE:
> + perf_config = perf_event_name(event_symbols_hw, config);
> + break;
> + case PERF_TYPE_SOFTWARE:
> + perf_config = perf_event_name(event_symbols_sw, config);
> + break;
> + case PERF_TYPE_HW_CACHE:
> + perf_config = perf_config_hw_cache_str(config);
> + break;
> + default:
> + perf_config = NULL;
> + break;
> + }
> + return perf_config;
> +}
> +
> +static void
> +show_perf_event_event_json(struct bpf_link_info *info, json_writer_t *wtr)
> +{
> + __u64 config = info->perf_event.event.config;
> + __u32 type = info->perf_event.event.type;
> + const char *perf_type, *perf_config;
> +
> + perf_type = perf_event_name(perf_type_name, type);
> + if (perf_type)
> + jsonw_string_field(wtr, "event_type", perf_type);
> + else
> + jsonw_uint_field(wtr, "event_type", type);
> +
> + perf_config = perf_config_str(type, config);
> + if (perf_config)
> + jsonw_string_field(wtr, "event_config", perf_config);
> + else
> + jsonw_uint_field(wtr, "event_config", config);
> +
> + if (type == PERF_TYPE_HW_CACHE && perf_config)
> + free((void *)perf_config);
> +}
> +
> static int show_link_close_json(int fd, struct bpf_link_info *info)
> {
> struct bpf_prog_info prog_info;
> @@ -329,6 +435,24 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
> case BPF_LINK_TYPE_KPROBE_MULTI:
> show_kprobe_multi_json(info, json_wtr);
> break;
> + case BPF_LINK_TYPE_PERF_EVENT:
> + switch (info->perf_event.type) {
> + case BPF_PERF_EVENT_EVENT:
> + show_perf_event_event_json(info, json_wtr);
> + break;
> + case BPF_PERF_EVENT_TRACEPOINT:
> + show_perf_event_tracepoint_json(info, json_wtr);
> + break;
> + case BPF_PERF_EVENT_KPROBE:
> + show_perf_event_kprobe_json(info, json_wtr);
> + break;
> + case BPF_PERF_EVENT_UPROBE:
> + show_perf_event_uprobe_json(info, json_wtr);
> + break;
> + default:
> + break;
> + }
> + break;
> default:
> break;
> }
> @@ -500,6 +624,75 @@ static void show_kprobe_multi_plain(struct bpf_link_info *info)
> }
> }
>
> +static void show_perf_event_kprobe_plain(struct bpf_link_info *info)
> +{
> + const char *buf;
> +
> + buf = (const char *)u64_to_ptr(info->perf_event.kprobe.func_name);
> + if (buf[0] == '\0' && !info->perf_event.kprobe.addr)
> + return;
> +
> + if (info->perf_event.kprobe.flags & 0x1)
> + printf("\n\tkretprobe ");
> + else
> + printf("\n\tkprobe ");
> + if (info->perf_event.kprobe.addr)
> + printf("%llx ", info->perf_event.kprobe.addr);
> + printf("%s", buf);
> + if (info->perf_event.kprobe.offset)
> + printf("+%#x", info->perf_event.kprobe.offset);
> + printf(" ");
> +}
> +
> +static void show_perf_event_uprobe_plain(struct bpf_link_info *info)
> +{
> + const char *buf;
> +
> + buf = (const char *)u64_to_ptr(info->perf_event.uprobe.file_name);
> + if (buf[0] == '\0')
> + return;
> +
> + if (info->perf_event.uprobe.flags & 0x1)
> + printf("\n\turetprobe ");
> + else
> + printf("\n\tuprobe ");
> + printf("%s+%#x ", buf, info->perf_event.uprobe.offset);
> +}
> +
> +static void show_perf_event_tracepoint_plain(struct bpf_link_info *info)
> +{
> + const char *buf;
> +
> + buf = (const char *)u64_to_ptr(info->perf_event.tracepoint.tp_name);
> + if (buf[0] == '\0')
> + return;
> +
> + printf("\n\ttracepoint %s ", buf);
> +}
> +
> +static void show_perf_event_event_plain(struct bpf_link_info *info)
> +{
> + __u64 config = info->perf_event.event.config;
> + __u32 type = info->perf_event.event.type;
> + const char *perf_type, *perf_config;
> +
> + printf("\n\tevent ");
> + perf_type = perf_event_name(perf_type_name, type);
> + if (perf_type)
> + printf("%s:", perf_type);
> + else
> + printf("%u :", type);
> +
> + perf_config = perf_config_str(type, config);
> + if (perf_config)
> + printf("%s ", perf_config);
> + else
> + printf("%llu ", config);
> +
> + if (type == PERF_TYPE_HW_CACHE && perf_config)
> + free((void *)perf_config);
> +}
> +
> static int show_link_close_plain(int fd, struct bpf_link_info *info)
> {
> struct bpf_prog_info prog_info;
> @@ -548,6 +741,24 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
> case BPF_LINK_TYPE_KPROBE_MULTI:
> show_kprobe_multi_plain(info);
> break;
> + case BPF_LINK_TYPE_PERF_EVENT:
> + switch (info->perf_event.type) {
> + case BPF_PERF_EVENT_EVENT:
> + show_perf_event_event_plain(info);
> + break;
> + case BPF_PERF_EVENT_TRACEPOINT:
> + show_perf_event_tracepoint_plain(info);
> + break;
> + case BPF_PERF_EVENT_KPROBE:
> + show_perf_event_kprobe_plain(info);
> + break;
> + case BPF_PERF_EVENT_UPROBE:
> + show_perf_event_uprobe_plain(info);
> + break;
> + default:
> + break;
> + }
> + break;
> default:
> break;
> }
> @@ -570,11 +781,12 @@ static int do_show_link(int fd)
> struct bpf_link_info info;
> __u32 len = sizeof(info);
> __u64 *addrs = NULL;
> - char buf[256];
> + char buf[PATH_MAX];
> int count;
> int err;
>
> memset(&info, 0, sizeof(info));
> + buf[0] = '\0';
> again:
> err = bpf_link_get_info_by_fd(fd, &info, &len);
> if (err) {
> @@ -609,7 +821,30 @@ static int do_show_link(int fd)
> goto again;
> }
> }
> + if (info.type == BPF_LINK_TYPE_PERF_EVENT) {
> + if (info.perf_event.type == BPF_PERF_EVENT_EVENT)
> + goto out;
This "if (...) goto out;" seems unnecessary? If info.perf_event.type is
BPF_PERF_EVENT_EVENT we won't match any of the conditions below and
should reach the "out:" label anyway (and that label seems also
unnecessary)?
> + if (info.perf_event.type == BPF_PERF_EVENT_TRACEPOINT &&
> + !info.perf_event.tracepoint.tp_name) {
> + info.perf_event.tracepoint.tp_name = (unsigned long)&buf;
> + info.perf_event.tracepoint.name_len = sizeof(buf);
> + goto again;
> + }
> + if (info.perf_event.type == BPF_PERF_EVENT_KPROBE &&
> + !info.perf_event.kprobe.func_name) {
> + info.perf_event.kprobe.func_name = (unsigned long)&buf;
> + info.perf_event.kprobe.name_len = sizeof(buf);
> + goto again;
> + }
> + if (info.perf_event.type == BPF_PERF_EVENT_UPROBE &&
> + !info.perf_event.uprobe.file_name) {
> + info.perf_event.uprobe.file_name = (unsigned long)&buf;
> + info.perf_event.uprobe.name_len = sizeof(buf);
> + goto again;
> + }
> + }
>
> +out:
> if (json_output)
> show_link_close_json(fd, &info);
> else
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v5 bpf-next 11/11] bpftool: Show perf link info
2023-06-23 16:49 ` Quentin Monnet
@ 2023-06-25 14:31 ` Yafang Shao
0 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-25 14:31 UTC (permalink / raw)
To: Quentin Monnet
Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
kpsingh, sdf, haoluo, jolsa, rostedt, mhiramat, bpf,
linux-trace-kernel
On Sat, Jun 24, 2023 at 12:49 AM Quentin Monnet <quentin@isovalent.com> wrote:
>
> 2023-06-23 14:15 UTC+0000 ~ Yafang Shao <laoar.shao@gmail.com>
> > Enhance bpftool to display comprehensive information about exposed
> > perf_event links, covering uprobe, kprobe, tracepoint, and generic perf
> > events. The resulting output will include the following details:
> >
> > $ tools/bpf/bpftool/bpftool link show
> > 4: perf_event prog 23
> > uprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
> > bpf_cookie 0
> > pids uprobe(27503)
> > 5: perf_event prog 24
> > uretprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
> > bpf_cookie 0
> > pids uprobe(27503)
> > 6: perf_event prog 31
> > kprobe ffffffffa90a9660 kernel_clone
> > bpf_cookie 0
> > pids kprobe(27777)
> > 7: perf_event prog 30
> > kretprobe ffffffffa90a9660 kernel_clone
> > bpf_cookie 0
> > pids kprobe(27777)
> > 8: perf_event prog 37
> > tracepoint sched_switch
> > bpf_cookie 0
> > pids tracepoint(28036)
> > 9: perf_event prog 43
> > event software:cpu-clock
> > bpf_cookie 0
> > pids perf_event(28261)
> > 10: perf_event prog 43
> > event hw-cache:LLC-load-misses
> > bpf_cookie 0
> > pids perf_event(28261)
> > 11: perf_event prog 43
> > event hardware:cpu-cycles
> > bpf_cookie 0
> > pids perf_event(28261)
> >
> > $ tools/bpf/bpftool/bpftool link show -j
> > [{"id":4,"type":"perf_event","prog_id":23,"retprobe":false,"file":"/home/dev/waken/bpf/uprobe/a.out","offset":4920,"bpf_cookie":0,"pids":[{"pid":27503,"comm":"uprobe"}]},{"id":5,"type":"perf_event","prog_id":24,"retprobe":true,"file":"/home/dev/waken/bpf/uprobe/a.out","offset":4920,"bpf_cookie":0,"pids":[{"pid":27503,"comm":"uprobe"}]},{"id":6,"type":"perf_event","prog_id":31,"retprobe":false,"addr":18446744072250627680,"func":"kernel_clone","offset":0,"bpf_cookie":0,"pids":[{"pid":27777,"comm":"kprobe"}]},{"id":7,"type":"perf_event","prog_id":30,"retprobe":true,"addr":18446744072250627680,"func":"kernel_clone","offset":0,"bpf_cookie":0,"pids":[{"pid":27777,"comm":"kprobe"}]},{"id":8,"type":"perf_event","prog_id":37,"tracepoint":"sched_switch","bpf_cookie":0,"pids":[{"pid":28036,"comm":"tracepoint"}]},{"id":9,"type":"perf_event","prog_id":43,"event_type":"software","event_config":"cpu-clock","bpf_cookie":0,"pids":[{"pid":28261,"comm":"perf_event"}]},{"id":10,"type":"perf_event","prog_id":43,"event_type":"hw-cache","event_config":"LLC-load-misses","bpf_cookie":0,"pids":[{"pid":28261,"comm":"perf_event"}]},{"id":11,"type":"perf_event","prog_id":43,"event_type":"hardware","event_config":"cpu-cycles","bpf_cookie":0,"pids":[{"pid":28261,"comm":"perf_event"}]}]
> >
> > For generic perf events, the displayed information in bpftool is limited to
> > the type and configuration, while other attributes such as sample_period,
> > sample_freq, etc., are not included.
> >
> > The kernel function address won't be exposed if it is not permitted by
> > kptr_restrict. The result is as follows when kptr_restrict is set to 2.
> >
> > $ tools/bpf/bpftool/bpftool link show
> > 4: perf_event prog 23
> > uprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
> > 5: perf_event prog 24
> > uretprobe /home/dev/waken/bpf/uprobe/a.out+0x1338
> > 6: perf_event prog 31
> > kprobe kernel_clone
> > 7: perf_event prog 30
> > kretprobe kernel_clone
> > 8: perf_event prog 37
> > tracepoint sched_switch
> > 9: perf_event prog 43
> > event software:cpu-clock
> > 10: perf_event prog 43
> > event hw-cache:LLC-load-misses
> > 11: perf_event prog 43
> > event hardware:cpu-cycles
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > ---
> > tools/bpf/bpftool/link.c | 237 ++++++++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 236 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
> > index e5aeee3..31bee95 100644
> > --- a/tools/bpf/bpftool/link.c
> > +++ b/tools/bpf/bpftool/link.c
> > @@ -17,6 +17,8 @@
> > #include "main.h"
> > #include "xlated_dumper.h"
> >
> > +#define PERF_HW_CACHE_LEN 128
> > +
> > static struct hashmap *link_table;
> > static struct dump_data dd = {};
> >
> > @@ -274,6 +276,110 @@ static int cmp_u64(const void *A, const void *B)
> > jsonw_end_array(json_wtr);
> > }
> >
> > +static void
> > +show_perf_event_kprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
> > +{
> > + jsonw_bool_field(wtr, "retprobe", info->perf_event.kprobe.flags & 0x1);
> > + jsonw_uint_field(wtr, "addr", info->perf_event.kprobe.addr);
> > + jsonw_string_field(wtr, "func",
> > + u64_to_ptr(info->perf_event.kprobe.func_name));
> > + jsonw_uint_field(wtr, "offset", info->perf_event.kprobe.offset);
> > +}
> > +
> > +static void
> > +show_perf_event_uprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
> > +{
> > + jsonw_bool_field(wtr, "retprobe", info->perf_event.uprobe.flags & 0x1);
> > + jsonw_string_field(wtr, "file",
> > + u64_to_ptr(info->perf_event.uprobe.file_name));
> > + jsonw_uint_field(wtr, "offset", info->perf_event.uprobe.offset);
> > +}
> > +
> > +static void
> > +show_perf_event_tracepoint_json(struct bpf_link_info *info, json_writer_t *wtr)
> > +{
> > + jsonw_string_field(wtr, "tracepoint",
> > + u64_to_ptr(info->perf_event.tracepoint.tp_name));
> > +}
> > +
> > +static char *perf_config_hw_cache_str(__u64 config)
> > +{
> > + const char *hw_cache, *result, *op;
> > + char *str = malloc(PERF_HW_CACHE_LEN);
> > +
> > + if (!str) {
> > + p_err("mem alloc failed");
> > + return NULL;
> > + }
> > +
> > + hw_cache = perf_event_name(evsel__hw_cache, config & 0xff);
> > + if (hw_cache)
> > + snprintf(str, PERF_HW_CACHE_LEN, "%s-", hw_cache);
> > + else
> > + snprintf(str, PERF_HW_CACHE_LEN, "%lld-", config & 0xff);
> > +
> > + op = perf_event_name(evsel__hw_cache_op, (config >> 8) & 0xff);
> > + if (op)
> > + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
> > + "%s-", op);
> > + else
> > + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
> > + "%lld-", (config >> 8) & 0xff);
> > +
> > + result = perf_event_name(evsel__hw_cache_result, config >> 16);
> > + if (result)
> > + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
> > + "%s", result);
> > + else
> > + snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
> > + "%lld", config >> 16);
> > + return str;
> > +}
> > +
> > +static const char *perf_config_str(__u32 type, __u64 config)
> > +{
> > + const char *perf_config;
> > +
> > + switch (type) {
> > + case PERF_TYPE_HARDWARE:
> > + perf_config = perf_event_name(event_symbols_hw, config);
> > + break;
> > + case PERF_TYPE_SOFTWARE:
> > + perf_config = perf_event_name(event_symbols_sw, config);
> > + break;
> > + case PERF_TYPE_HW_CACHE:
> > + perf_config = perf_config_hw_cache_str(config);
> > + break;
> > + default:
> > + perf_config = NULL;
> > + break;
> > + }
> > + return perf_config;
> > +}
> > +
> > +static void
> > +show_perf_event_event_json(struct bpf_link_info *info, json_writer_t *wtr)
> > +{
> > + __u64 config = info->perf_event.event.config;
> > + __u32 type = info->perf_event.event.type;
> > + const char *perf_type, *perf_config;
> > +
> > + perf_type = perf_event_name(perf_type_name, type);
> > + if (perf_type)
> > + jsonw_string_field(wtr, "event_type", perf_type);
> > + else
> > + jsonw_uint_field(wtr, "event_type", type);
> > +
> > + perf_config = perf_config_str(type, config);
> > + if (perf_config)
> > + jsonw_string_field(wtr, "event_config", perf_config);
> > + else
> > + jsonw_uint_field(wtr, "event_config", config);
> > +
> > + if (type == PERF_TYPE_HW_CACHE && perf_config)
> > + free((void *)perf_config);
> > +}
> > +
> > static int show_link_close_json(int fd, struct bpf_link_info *info)
> > {
> > struct bpf_prog_info prog_info;
> > @@ -329,6 +435,24 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
> > case BPF_LINK_TYPE_KPROBE_MULTI:
> > show_kprobe_multi_json(info, json_wtr);
> > break;
> > + case BPF_LINK_TYPE_PERF_EVENT:
> > + switch (info->perf_event.type) {
> > + case BPF_PERF_EVENT_EVENT:
> > + show_perf_event_event_json(info, json_wtr);
> > + break;
> > + case BPF_PERF_EVENT_TRACEPOINT:
> > + show_perf_event_tracepoint_json(info, json_wtr);
> > + break;
> > + case BPF_PERF_EVENT_KPROBE:
> > + show_perf_event_kprobe_json(info, json_wtr);
> > + break;
> > + case BPF_PERF_EVENT_UPROBE:
> > + show_perf_event_uprobe_json(info, json_wtr);
> > + break;
> > + default:
> > + break;
> > + }
> > + break;
> > default:
> > break;
> > }
> > @@ -500,6 +624,75 @@ static void show_kprobe_multi_plain(struct bpf_link_info *info)
> > }
> > }
> >
> > +static void show_perf_event_kprobe_plain(struct bpf_link_info *info)
> > +{
> > + const char *buf;
> > +
> > + buf = (const char *)u64_to_ptr(info->perf_event.kprobe.func_name);
> > + if (buf[0] == '\0' && !info->perf_event.kprobe.addr)
> > + return;
> > +
> > + if (info->perf_event.kprobe.flags & 0x1)
> > + printf("\n\tkretprobe ");
> > + else
> > + printf("\n\tkprobe ");
> > + if (info->perf_event.kprobe.addr)
> > + printf("%llx ", info->perf_event.kprobe.addr);
> > + printf("%s", buf);
> > + if (info->perf_event.kprobe.offset)
> > + printf("+%#x", info->perf_event.kprobe.offset);
> > + printf(" ");
> > +}
> > +
> > +static void show_perf_event_uprobe_plain(struct bpf_link_info *info)
> > +{
> > + const char *buf;
> > +
> > + buf = (const char *)u64_to_ptr(info->perf_event.uprobe.file_name);
> > + if (buf[0] == '\0')
> > + return;
> > +
> > + if (info->perf_event.uprobe.flags & 0x1)
> > + printf("\n\turetprobe ");
> > + else
> > + printf("\n\tuprobe ");
> > + printf("%s+%#x ", buf, info->perf_event.uprobe.offset);
> > +}
> > +
> > +static void show_perf_event_tracepoint_plain(struct bpf_link_info *info)
> > +{
> > + const char *buf;
> > +
> > + buf = (const char *)u64_to_ptr(info->perf_event.tracepoint.tp_name);
> > + if (buf[0] == '\0')
> > + return;
> > +
> > + printf("\n\ttracepoint %s ", buf);
> > +}
> > +
> > +static void show_perf_event_event_plain(struct bpf_link_info *info)
> > +{
> > + __u64 config = info->perf_event.event.config;
> > + __u32 type = info->perf_event.event.type;
> > + const char *perf_type, *perf_config;
> > +
> > + printf("\n\tevent ");
> > + perf_type = perf_event_name(perf_type_name, type);
> > + if (perf_type)
> > + printf("%s:", perf_type);
> > + else
> > + printf("%u :", type);
> > +
> > + perf_config = perf_config_str(type, config);
> > + if (perf_config)
> > + printf("%s ", perf_config);
> > + else
> > + printf("%llu ", config);
> > +
> > + if (type == PERF_TYPE_HW_CACHE && perf_config)
> > + free((void *)perf_config);
> > +}
> > +
> > static int show_link_close_plain(int fd, struct bpf_link_info *info)
> > {
> > struct bpf_prog_info prog_info;
> > @@ -548,6 +741,24 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
> > case BPF_LINK_TYPE_KPROBE_MULTI:
> > show_kprobe_multi_plain(info);
> > break;
> > + case BPF_LINK_TYPE_PERF_EVENT:
> > + switch (info->perf_event.type) {
> > + case BPF_PERF_EVENT_EVENT:
> > + show_perf_event_event_plain(info);
> > + break;
> > + case BPF_PERF_EVENT_TRACEPOINT:
> > + show_perf_event_tracepoint_plain(info);
> > + break;
> > + case BPF_PERF_EVENT_KPROBE:
> > + show_perf_event_kprobe_plain(info);
> > + break;
> > + case BPF_PERF_EVENT_UPROBE:
> > + show_perf_event_uprobe_plain(info);
> > + break;
> > + default:
> > + break;
> > + }
> > + break;
> > default:
> > break;
> > }
> > @@ -570,11 +781,12 @@ static int do_show_link(int fd)
> > struct bpf_link_info info;
> > __u32 len = sizeof(info);
> > __u64 *addrs = NULL;
> > - char buf[256];
> > + char buf[PATH_MAX];
> > int count;
> > int err;
> >
> > memset(&info, 0, sizeof(info));
> > + buf[0] = '\0';
> > again:
> > err = bpf_link_get_info_by_fd(fd, &info, &len);
> > if (err) {
> > @@ -609,7 +821,30 @@ static int do_show_link(int fd)
> > goto again;
> > }
> > }
> > + if (info.type == BPF_LINK_TYPE_PERF_EVENT) {
> > + if (info.perf_event.type == BPF_PERF_EVENT_EVENT)
> > + goto out;
>
> This "if (...) goto out;" seems unnecessary? If info.perf_event.type is
> BPF_PERF_EVENT_EVENT we won't match any of the conditions below and
> should reach the "out:" label anyway (and that label seems also
> unnecessary)?
Makes sense. Will change it.
--
Regards
Yafang
* Re: [PATCH v5 bpf-next 11/11] bpftool: Show perf link info
2023-06-23 14:15 ` [PATCH v5 bpf-next 11/11] bpftool: Show perf link info Yafang Shao
2023-06-23 16:49 ` Quentin Monnet
@ 2023-06-23 17:13 ` Alexei Starovoitov
2023-06-25 14:32 ` Yafang Shao
1 sibling, 1 reply; 25+ messages in thread
From: Alexei Starovoitov @ 2023-06-23 17:13 UTC (permalink / raw)
To: Yafang Shao
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Quentin Monnet,
Steven Rostedt, Masami Hiramatsu, bpf, linux-trace-kernel
On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> pids uprobe(27503)
> 6: perf_event prog 31
...
> pids kprobe(27777)
> 7: perf_event prog 30
...
> pids kprobe(27777)
> 8: perf_event prog 37
> tracepoint sched_switch
> bpf_cookie 0
> pids tracepoint(28036)
uprobe/kprobe/tracepoint are really the names of your user space applications?
or something broke in bpftool that it doesn't show comm correctly?
* Re: [PATCH v5 bpf-next 11/11] bpftool: Show perf link info
2023-06-23 17:13 ` Alexei Starovoitov
@ 2023-06-25 14:32 ` Yafang Shao
0 siblings, 0 replies; 25+ messages in thread
From: Yafang Shao @ 2023-06-25 14:32 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Quentin Monnet,
Steven Rostedt, Masami Hiramatsu, bpf, linux-trace-kernel
On Sat, Jun 24, 2023 at 1:14 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, Jun 23, 2023 at 7:16 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> > pids uprobe(27503)
> > 6: perf_event prog 31
> ...
> > pids kprobe(27777)
> > 7: perf_event prog 30
> ...
> > pids kprobe(27777)
> > 8: perf_event prog 37
> > tracepoint sched_switch
> > bpf_cookie 0
> > pids tracepoint(28036)
>
> uprobe/kprobe/tracepoint are really the names of your user space applications?
Yes, they are my test applications. I just named them that way for an
easy test :)
>
> or something broke in bpftool that it doesn't show comm correctly?
--
Regards
Yafang