* [PATCH bpf-next] libbpf: simplify BPF_CORE_READ_BITFIELD_PROBED usage
@ 2019-11-06 20:15 Andrii Nakryiko
2019-11-06 21:18 ` Yonghong Song
0 siblings, 1 reply; 4+ messages in thread
From: Andrii Nakryiko @ 2019-11-06 20:15 UTC (permalink / raw)
To: bpf, netdev, ast, daniel
Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Yonghong Song
Streamline the BPF_CORE_READ_BITFIELD_PROBED interface to follow
BPF_CORE_READ_BITFIELD (direct) and BPF_CORE_READ in general: just
return the read result, or 0 if the underlying bpf_probe_read() failed.

In practice, real applications rarely check the bpf_probe_read() result,
because it has to always work; anything else is a bug. So propagating the
internal bpf_probe_read() error from this macro hurts usability without
providing real benefit. This patch fixes the issue and simplifies usage,
which is noticeable even in the selftest itself.
Cc: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
tools/lib/bpf/bpf_core_read.h | 27 ++++++++-----------
.../progs/test_core_reloc_bitfields_probed.c | 19 +++++--------
2 files changed, 18 insertions(+), 28 deletions(-)
diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index 11461b2623b0..7009dc90e012 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -39,32 +39,27 @@ enum bpf_field_info_kind {
#endif
/*
- * Extract bitfield, identified by src->field, and put its value into u64
- * *res. All this is done in relocatable manner, so bitfield changes such as
+ * Extract bitfield, identified by s->field, and return its value as u64.
+ * All this is done in relocatable manner, so bitfield changes such as
* signedness, bit size, offset changes, this will be handled automatically.
* This version of macro is using bpf_probe_read() to read underlying integer
* storage. Macro functions as an expression and its return type is
* bpf_probe_read()'s return value: 0, on success, <0 on error.
*/
-#define BPF_CORE_READ_BITFIELD_PROBED(src, field, res) ({ \
- unsigned long long val; \
+#define BPF_CORE_READ_BITFIELD_PROBED(s, field) ({ \
+ unsigned long long val = 0; \
\
- *res = 0; \
- val = __CORE_BITFIELD_PROBE_READ(res, src, field); \
- if (!val) { \
- *res <<= __CORE_RELO(src, field, LSHIFT_U64); \
- val = __CORE_RELO(src, field, RSHIFT_U64); \
- if (__CORE_RELO(src, field, SIGNED)) \
- *res = ((long long)*res) >> val; \
- else \
- *res = ((unsigned long long)*res) >> val; \
- val = 0; \
- } \
+ __CORE_BITFIELD_PROBE_READ(&val, s, field); \
+ val <<= __CORE_RELO(s, field, LSHIFT_U64); \
+ if (__CORE_RELO(s, field, SIGNED)) \
+ val = ((long long)val) >> __CORE_RELO(s, field, RSHIFT_U64); \
+ else \
+ val = val >> __CORE_RELO(s, field, RSHIFT_U64); \
val; \
})
/*
- * Extract bitfield, identified by src->field, and return its value as u64.
+ * Extract bitfield, identified by s->field, and return its value as u64.
* This version of macro is using direct memory reads and should be used from
* BPF program types that support such functionality (e.g., typed raw
* tracepoints).
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c b/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c
index a381f8ac2419..e466e3ab7de4 100644
--- a/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c
@@ -37,11 +37,6 @@ struct core_reloc_bitfields_output {
int64_t s32;
};
-#define TRANSFER_BITFIELD(in, out, field) \
- if (BPF_CORE_READ_BITFIELD_PROBED(in, field, &res)) \
- return 1; \
- out->field = res
-
SEC("raw_tracepoint/sys_enter")
int test_core_bitfields(void *ctx)
{
@@ -49,13 +44,13 @@ int test_core_bitfields(void *ctx)
struct core_reloc_bitfields_output *out = (void *)&data.out;
uint64_t res;
- TRANSFER_BITFIELD(in, out, ub1);
- TRANSFER_BITFIELD(in, out, ub2);
- TRANSFER_BITFIELD(in, out, ub7);
- TRANSFER_BITFIELD(in, out, sb4);
- TRANSFER_BITFIELD(in, out, sb20);
- TRANSFER_BITFIELD(in, out, u32);
- TRANSFER_BITFIELD(in, out, s32);
+ out->ub1 = BPF_CORE_READ_BITFIELD_PROBED(in, ub1);
+ out->ub2 = BPF_CORE_READ_BITFIELD_PROBED(in, ub2);
+ out->ub7 = BPF_CORE_READ_BITFIELD_PROBED(in, ub7);
+ out->sb4 = BPF_CORE_READ_BITFIELD_PROBED(in, sb4);
+ out->sb20 = BPF_CORE_READ_BITFIELD_PROBED(in, sb20);
+ out->u32 = BPF_CORE_READ_BITFIELD_PROBED(in, u32);
+ out->s32 = BPF_CORE_READ_BITFIELD_PROBED(in, s32);
return 0;
}
--
2.17.1
* Re: [PATCH bpf-next] libbpf: simplify BPF_CORE_READ_BITFIELD_PROBED usage
2019-11-06 20:15 [PATCH bpf-next] libbpf: simplify BPF_CORE_READ_BITFIELD_PROBED usage Andrii Nakryiko
@ 2019-11-06 21:18 ` Yonghong Song
2019-11-06 21:58 ` Alexei Starovoitov
0 siblings, 1 reply; 4+ messages in thread
From: Yonghong Song @ 2019-11-06 21:18 UTC (permalink / raw)
To: Andrii Nakryiko, bpf, netdev, Alexei Starovoitov, daniel
Cc: andrii.nakryiko, Kernel Team
On 11/6/19 12:15 PM, Andrii Nakryiko wrote:
> Streamline the BPF_CORE_READ_BITFIELD_PROBED interface to follow
> BPF_CORE_READ_BITFIELD (direct) and BPF_CORE_READ in general: just
> return the read result, or 0 if the underlying bpf_probe_read() failed.
>
> In practice, real applications rarely check the bpf_probe_read() result,
> because it has to always work; anything else is a bug. So propagating the
> internal bpf_probe_read() error from this macro hurts usability without
> providing real benefit. This patch fixes the issue and simplifies usage,
> which is noticeable even in the selftest itself.
Agreed. This makes it consistent with the direct read, where the
returned value will be 0 if any fault happens.

In the rare case where a user wants to distinguish a legitimate 0 value
from a bpf_probe_read() error, all the building-block macros are in the
header file, so the user can assemble a custom solution. But let us have
the API work well for the common use case, with good usability.
>
> Cc: Yonghong Song <yhs@fb.com>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
> ---
> tools/lib/bpf/bpf_core_read.h | 27 ++++++++-----------
> .../progs/test_core_reloc_bitfields_probed.c | 19 +++++--------
> 2 files changed, 18 insertions(+), 28 deletions(-)
>
> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> index 11461b2623b0..7009dc90e012 100644
> --- a/tools/lib/bpf/bpf_core_read.h
> +++ b/tools/lib/bpf/bpf_core_read.h
> @@ -39,32 +39,27 @@ enum bpf_field_info_kind {
> #endif
>
> /*
> - * Extract bitfield, identified by src->field, and put its value into u64
> - * *res. All this is done in relocatable manner, so bitfield changes such as
> + * Extract bitfield, identified by s->field, and return its value as u64.
> + * All this is done in relocatable manner, so bitfield changes such as
> * signedness, bit size, offset changes, this will be handled automatically.
> * This version of macro is using bpf_probe_read() to read underlying integer
> * storage. Macro functions as an expression and its return type is
> * bpf_probe_read()'s return value: 0, on success, <0 on error.
> */
> -#define BPF_CORE_READ_BITFIELD_PROBED(src, field, res) ({ \
> - unsigned long long val; \
> +#define BPF_CORE_READ_BITFIELD_PROBED(s, field) ({ \
> + unsigned long long val = 0; \
> \
> - *res = 0; \
> - val = __CORE_BITFIELD_PROBE_READ(res, src, field); \
> - if (!val) { \
> - *res <<= __CORE_RELO(src, field, LSHIFT_U64); \
> - val = __CORE_RELO(src, field, RSHIFT_U64); \
> - if (__CORE_RELO(src, field, SIGNED)) \
> - *res = ((long long)*res) >> val; \
> - else \
> - *res = ((unsigned long long)*res) >> val; \
> - val = 0; \
> - } \
> + __CORE_BITFIELD_PROBE_READ(&val, s, field); \
> + val <<= __CORE_RELO(s, field, LSHIFT_U64); \
> + if (__CORE_RELO(s, field, SIGNED)) \
> + val = ((long long)val) >> __CORE_RELO(s, field, RSHIFT_U64); \
> + else \
> + val = val >> __CORE_RELO(s, field, RSHIFT_U64); \
> val; \
> })
>
> /*
> - * Extract bitfield, identified by src->field, and return its value as u64.
> + * Extract bitfield, identified by s->field, and return its value as u64.
> * This version of macro is using direct memory reads and should be used from
> * BPF program types that support such functionality (e.g., typed raw
> * tracepoints).
> diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c b/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c
> index a381f8ac2419..e466e3ab7de4 100644
> --- a/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c
> +++ b/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c
> @@ -37,11 +37,6 @@ struct core_reloc_bitfields_output {
> int64_t s32;
> };
>
> -#define TRANSFER_BITFIELD(in, out, field) \
> - if (BPF_CORE_READ_BITFIELD_PROBED(in, field, &res)) \
> - return 1; \
> - out->field = res
> -
> SEC("raw_tracepoint/sys_enter")
> int test_core_bitfields(void *ctx)
> {
> @@ -49,13 +44,13 @@ int test_core_bitfields(void *ctx)
> struct core_reloc_bitfields_output *out = (void *)&data.out;
> uint64_t res;
>
> - TRANSFER_BITFIELD(in, out, ub1);
> - TRANSFER_BITFIELD(in, out, ub2);
> - TRANSFER_BITFIELD(in, out, ub7);
> - TRANSFER_BITFIELD(in, out, sb4);
> - TRANSFER_BITFIELD(in, out, sb20);
> - TRANSFER_BITFIELD(in, out, u32);
> - TRANSFER_BITFIELD(in, out, s32);
> + out->ub1 = BPF_CORE_READ_BITFIELD_PROBED(in, ub1);
> + out->ub2 = BPF_CORE_READ_BITFIELD_PROBED(in, ub2);
> + out->ub7 = BPF_CORE_READ_BITFIELD_PROBED(in, ub7);
> + out->sb4 = BPF_CORE_READ_BITFIELD_PROBED(in, sb4);
> + out->sb20 = BPF_CORE_READ_BITFIELD_PROBED(in, sb20);
> + out->u32 = BPF_CORE_READ_BITFIELD_PROBED(in, u32);
> + out->s32 = BPF_CORE_READ_BITFIELD_PROBED(in, s32);
>
> return 0;
> }
>
* Re: [PATCH bpf-next] libbpf: simplify BPF_CORE_READ_BITFIELD_PROBED usage
2019-11-06 21:18 ` Yonghong Song
@ 2019-11-06 21:58 ` Alexei Starovoitov
2019-11-06 22:02 ` Yonghong Song
0 siblings, 1 reply; 4+ messages in thread
From: Alexei Starovoitov @ 2019-11-06 21:58 UTC (permalink / raw)
To: Yonghong Song
Cc: Andrii Nakryiko, bpf, netdev, Alexei Starovoitov, daniel,
andrii.nakryiko, Kernel Team
On Wed, Nov 6, 2019 at 1:21 PM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 11/6/19 12:15 PM, Andrii Nakryiko wrote:
> > Streamline the BPF_CORE_READ_BITFIELD_PROBED interface to follow
> > BPF_CORE_READ_BITFIELD (direct) and BPF_CORE_READ in general: just
> > return the read result, or 0 if the underlying bpf_probe_read() failed.
> >
> > In practice, real applications rarely check the bpf_probe_read() result,
> > because it has to always work; anything else is a bug. So propagating the
> > internal bpf_probe_read() error from this macro hurts usability without
> > providing real benefit. This patch fixes the issue and simplifies usage,
> > which is noticeable even in the selftest itself.
>
> Agreed. This makes it consistent with the direct read, where the
> returned value will be 0 if any fault happens.
>
> In the rare case where a user wants to distinguish a legitimate 0 value
> from a bpf_probe_read() error, all the building-block macros are in the
> header file, so the user can assemble a custom solution. But let us have
> the API work well for the common use case, with good usability.
>
> >
> > Cc: Yonghong Song <yhs@fb.com>
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
>
> Acked-by: Yonghong Song <yhs@fb.com>
Applied. Thanks
Yonghong, please trim your replies.
* Re: [PATCH bpf-next] libbpf: simplify BPF_CORE_READ_BITFIELD_PROBED usage
2019-11-06 21:58 ` Alexei Starovoitov
@ 2019-11-06 22:02 ` Yonghong Song
0 siblings, 0 replies; 4+ messages in thread
From: Yonghong Song @ 2019-11-06 22:02 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Andrii Nakryiko, bpf, netdev, Alexei Starovoitov, daniel,
andrii.nakryiko, Kernel Team
On 11/6/19 1:58 PM, Alexei Starovoitov wrote:
> On Wed, Nov 6, 2019 at 1:21 PM Yonghong Song <yhs@fb.com> wrote:
>>
>>
>>
>> On 11/6/19 12:15 PM, Andrii Nakryiko wrote:
>>> Streamline the BPF_CORE_READ_BITFIELD_PROBED interface to follow
>>> BPF_CORE_READ_BITFIELD (direct) and BPF_CORE_READ in general: just
>>> return the read result, or 0 if the underlying bpf_probe_read() failed.
>>>
>>> In practice, real applications rarely check the bpf_probe_read() result,
>>> because it has to always work; anything else is a bug. So propagating the
>>> internal bpf_probe_read() error from this macro hurts usability without
>>> providing real benefit. This patch fixes the issue and simplifies usage,
>>> which is noticeable even in the selftest itself.
>>
>> Agreed. This makes it consistent with the direct read, where the
>> returned value will be 0 if any fault happens.
>>
>> In the rare case where a user wants to distinguish a legitimate 0 value
>> from a bpf_probe_read() error, all the building-block macros are in the
>> header file, so the user can assemble a custom solution. But let us have
>> the API work well for the common use case, with good usability.
>>
>>>
>>> Cc: Yonghong Song <yhs@fb.com>
>>> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
>>
>> Acked-by: Yonghong Song <yhs@fb.com>
>
> Applied. Thanks
>
> Yonghong, please trim your replies.
Sorry, forgot to do. Will remember next time.