* [PATCH bpf-next 0/2] Fix tail call counting with bpf2bpf
@ 2022-06-15 15:17 Jakub Sitnicki
  2022-06-15 15:17 ` [PATCH bpf-next 1/2] bpf, x86: Fix tail call count offset calculation on bpf2bpf call Jakub Sitnicki
  2022-06-15 15:17 ` [PATCH bpf-next 2/2] selftests/bpf: Test tail call counting with bpf2bpf and data on stack Jakub Sitnicki
  0 siblings, 2 replies; 7+ messages in thread
From: Jakub Sitnicki @ 2022-06-15 15:17 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Maciej Fijalkowski, kernel-team

While working on extending the aarch64 JIT to support mixing bpf2bpf calls
with tail calls, I ran into what looks like a bug in the x64 JIT. Please
see patch 1. Patch 2 adds a test so that we don't regress.

Jakub Sitnicki (2):
  bpf, x86: Fix tail call count offset calculation on bpf2bpf call
  selftests/bpf: Test tail call counting with bpf2bpf and data on stack

 arch/x86/net/bpf_jit_comp.c                   |  3 +-
 .../selftests/bpf/prog_tests/tailcalls.c      | 55 +++++++++++++++++++
 .../selftests/bpf/progs/tailcall_bpf2bpf6.c   | 42 ++++++++++++++
 3 files changed, 99 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c

-- 
2.35.3



* [PATCH bpf-next 1/2] bpf, x86: Fix tail call count offset calculation on bpf2bpf call
  2022-06-15 15:17 [PATCH bpf-next 0/2] Fix tail call counting with bpf2bpf Jakub Sitnicki
@ 2022-06-15 15:17 ` Jakub Sitnicki
  2022-06-16 14:45   ` Daniel Borkmann
  2022-06-15 15:17 ` [PATCH bpf-next 2/2] selftests/bpf: Test tail call counting with bpf2bpf and data on stack Jakub Sitnicki
  1 sibling, 1 reply; 7+ messages in thread
From: Jakub Sitnicki @ 2022-06-15 15:17 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Maciej Fijalkowski, kernel-team

On x86-64 the tail call count is passed from one BPF function to another
through %rax. Additionally, on function entry, the tail call count value is
stored on stack right after the BPF program stack, due to register
shortage.
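
For reference, a sketch of the frame layout the prologue sets up (with
rounded_depth = round_up(stack_depth, 8)), as can be read off the
disassembly below:

  [rbp - 0]                          saved %rbp
  [rbp - rounded_depth .. rbp - 1]   BPF program stack
  [rbp - rounded_depth - 8]          tail call count (pushed %rax)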

The stored count is later loaded from stack either when performing a tail
call - to check if we have not reached the tail call limit - or before
calling another BPF function, in order to pass it on via %rax.

In the latter case, we miscalculate the offset at which the tail call count
was stored on function entry. The JIT does not take into account that the
allocated BPF program stack is always a multiple of 8 on x86, while the
actual stack depth does not have to be.

This leads to a load from an offset that belongs to the BPF stack, as shown
in the example below:

SEC("tc")
int entry(struct __sk_buff *skb)
{
	/* Have data on stack whose size is not a multiple of 8 */
	volatile char arr[1] = {};
	return subprog_tail(skb);
}

int entry(struct __sk_buff * skb):
   0: (b4) w2 = 0
   1: (73) *(u8 *)(r10 -1) = r2
   2: (85) call pc+1#bpf_prog_ce2f79bb5f3e06dd_F
   3: (95) exit

int entry(struct __sk_buff * skb):
   0xffffffffa0201788:  nop    DWORD PTR [rax+rax*1+0x0]
   0xffffffffa020178d:  xor    eax,eax
   0xffffffffa020178f:  push   rbp
   0xffffffffa0201790:  mov    rbp,rsp
   0xffffffffa0201793:  sub    rsp,0x8
   0xffffffffa020179a:  push   rax
   0xffffffffa020179b:  xor    esi,esi
   0xffffffffa020179d:  mov    BYTE PTR [rbp-0x1],sil
   0xffffffffa02017a1:  mov    rax,QWORD PTR [rbp-0x9]	!!! tail call count
   0xffffffffa02017a8:  call   0xffffffffa02017d8       !!! is at rbp-0x10
   0xffffffffa02017ad:  leave
   0xffffffffa02017ae:  ret
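
Spelling out the arithmetic for this example: stack_depth is 1, so the
prologue allocates round_up(1, 8) = 8 bytes (sub rsp,0x8) and push rax
stores the count at rbp-0x10. The load, however, uses the offset
-(stack_depth + 8) = -(1 + 8) = -9, i.e. rbp-0x9, which overlaps the
BPF stack, while the correct offset is -round_up(1, 8) - 8 = -16, i.e.
rbp-0x10.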

Fix it by rounding up the BPF stack depth to a multiple of 8, when
calculating the tail call count offset on stack.

Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 arch/x86/net/bpf_jit_comp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index f298b18a9a3d..c98b8c0ed3b8 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1420,8 +1420,9 @@ st:			if (is_imm8(insn->off))
 		case BPF_JMP | BPF_CALL:
 			func = (u8 *) __bpf_call_base + imm32;
 			if (tail_call_reachable) {
+				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
 				EMIT3_off32(0x48, 0x8B, 0x85,
-					    -(bpf_prog->aux->stack_depth + 8));
+					    -round_up(bpf_prog->aux->stack_depth, 8) - 8);
 				if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
 					return -EINVAL;
 			} else {
-- 
2.35.3



* [PATCH bpf-next 2/2] selftests/bpf: Test tail call counting with bpf2bpf and data on stack
  2022-06-15 15:17 [PATCH bpf-next 0/2] Fix tail call counting with bpf2bpf Jakub Sitnicki
  2022-06-15 15:17 ` [PATCH bpf-next 1/2] bpf, x86: Fix tail call count offset calculation on bpf2bpf call Jakub Sitnicki
@ 2022-06-15 15:17 ` Jakub Sitnicki
  2022-06-16 14:41   ` Daniel Borkmann
  1 sibling, 1 reply; 7+ messages in thread
From: Jakub Sitnicki @ 2022-06-15 15:17 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Maciej Fijalkowski, kernel-team

Cover the case when the tail call count needs to be passed from one BPF
function to another, and the caller has data on stack. Specifically, when
the size of the data allocated on the BPF stack is not a multiple of 8.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 .../selftests/bpf/prog_tests/tailcalls.c      | 55 +++++++++++++++++++
 .../selftests/bpf/progs/tailcall_bpf2bpf6.c   | 42 ++++++++++++++
 2 files changed, 97 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c

diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
index c4da87ec3ba4..19c70880cfb3 100644
--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
@@ -831,6 +831,59 @@ static void test_tailcall_bpf2bpf_4(bool noise)
 	bpf_object__close(obj);
 }
 
+#include "tailcall_bpf2bpf6.skel.h"
+
+/* Tail call counting works even when there is data on stack whose
+ * size is not a multiple of 8 bytes.
+ */
+static void test_tailcall_bpf2bpf_6(void)
+{
+	struct tailcall_bpf2bpf6 *obj;
+	int err, map_fd, prog_fd, main_fd, data_fd, i, val;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
+
+	obj = tailcall_bpf2bpf6__open_and_load();
+	if (!ASSERT_OK_PTR(obj, "open and load"))
+		return;
+
+	main_fd = bpf_program__fd(obj->progs.entry);
+	if (!ASSERT_GE(main_fd, 0, "entry prog fd"))
+		goto out;
+
+	map_fd = bpf_map__fd(obj->maps.jmp_table);
+	if (!ASSERT_GE(map_fd, 0, "jmp_table map fd"))
+		goto out;
+
+	prog_fd = bpf_program__fd(obj->progs.classifier_0);
+	if (!ASSERT_GE(prog_fd, 0, "classifier_0 prog fd"))
+		goto out;
+
+	i = 0;
+	err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+	if (!ASSERT_OK(err, "jmp_table map update"))
+		goto out;
+
+	err = bpf_prog_test_run_opts(main_fd, &topts);
+	ASSERT_OK(err, "entry prog test run");
+	ASSERT_EQ(topts.retval, 0, "tailcall retval");
+
+	data_fd = bpf_map__fd(obj->maps.bss);
+	if (!ASSERT_GE(data_fd, 0, "bss map fd"))
+		goto out;
+
+	i = 0;
+	err = bpf_map_lookup_elem(data_fd, &i, &val);
+	ASSERT_OK(err, "bss map lookup");
+	ASSERT_EQ(val, 1, "done flag is set");
+
+out:
+	tailcall_bpf2bpf6__destroy(obj);
+}
+
 void test_tailcalls(void)
 {
 	if (test__start_subtest("tailcall_1"))
@@ -855,4 +908,6 @@ void test_tailcalls(void)
 		test_tailcall_bpf2bpf_4(false);
 	if (test__start_subtest("tailcall_bpf2bpf_5"))
 		test_tailcall_bpf2bpf_4(true);
+	if (test__start_subtest("tailcall_bpf2bpf_6"))
+		test_tailcall_bpf2bpf_6();
 }
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c
new file mode 100644
index 000000000000..256de9bcc621
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+#define __unused __attribute__((always_unused))
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+int done = 0;
+
+SEC("tc")
+int classifier_0(struct __sk_buff *skb __unused)
+{
+	done = 1;
+	return 0;
+}
+
+static __noinline
+int subprog_tail(struct __sk_buff *skb)
+{
+	/* Don't propagate the constant to the caller */
+	volatile int ret = 1;
+
+	bpf_tail_call_static(skb, &jmp_table, 0);
+	return ret;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+	/* Have data on stack whose size is not a multiple of 8 */
+	volatile char arr[1] = {};
+
+	return subprog_tail(skb);
+}
+
+char __license[] SEC("license") = "GPL";
-- 
2.35.3



* Re: [PATCH bpf-next 2/2] selftests/bpf: Test tail call counting with bpf2bpf and data on stack
  2022-06-15 15:17 ` [PATCH bpf-next 2/2] selftests/bpf: Test tail call counting with bpf2bpf and data on stack Jakub Sitnicki
@ 2022-06-16 14:41   ` Daniel Borkmann
  2022-06-16 15:23     ` Jakub Sitnicki
  0 siblings, 1 reply; 7+ messages in thread
From: Daniel Borkmann @ 2022-06-16 14:41 UTC (permalink / raw)
  To: Jakub Sitnicki, bpf
  Cc: netdev, Alexei Starovoitov, Andrii Nakryiko, Maciej Fijalkowski,
	kernel-team

On 6/15/22 5:17 PM, Jakub Sitnicki wrote:
> Cover the case when the tail call count needs to be passed from one BPF
> function to another, and the caller has data on stack. Specifically, when
> the size of the data allocated on the BPF stack is not a multiple of 8.
> 
> Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
> ---
>   .../selftests/bpf/prog_tests/tailcalls.c      | 55 +++++++++++++++++++
>   .../selftests/bpf/progs/tailcall_bpf2bpf6.c   | 42 ++++++++++++++
>   2 files changed, 97 insertions(+)
>   create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
> index c4da87ec3ba4..19c70880cfb3 100644
> --- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
> +++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
> @@ -831,6 +831,59 @@ static void test_tailcall_bpf2bpf_4(bool noise)
>   	bpf_object__close(obj);
>   }
>   
> +#include "tailcall_bpf2bpf6.skel.h"
> +
> +/* Tail call counting works even when there is data on stack whose
> + * size is not a multiple of 8 bytes.
> + */
> +static void test_tailcall_bpf2bpf_6(void)
> +{
> +	struct tailcall_bpf2bpf6 *obj;
> +	int err, map_fd, prog_fd, main_fd, data_fd, i, val;
> +	LIBBPF_OPTS(bpf_test_run_opts, topts,
> +		.data_in = &pkt_v4,
> +		.data_size_in = sizeof(pkt_v4),
> +		.repeat = 1,
> +	);
> +
> +	obj = tailcall_bpf2bpf6__open_and_load();
> +	if (!ASSERT_OK_PTR(obj, "open and load"))
> +		return;
> +
> +	main_fd = bpf_program__fd(obj->progs.entry);
> +	if (!ASSERT_GE(main_fd, 0, "entry prog fd"))
> +		goto out;
> +
> +	map_fd = bpf_map__fd(obj->maps.jmp_table);
> +	if (!ASSERT_GE(map_fd, 0, "jmp_table map fd"))
> +		goto out;
> +
> +	prog_fd = bpf_program__fd(obj->progs.classifier_0);
> +	if (!ASSERT_GE(prog_fd, 0, "classifier_0 prog fd"))
> +		goto out;
> +
> +	i = 0;
> +	err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
> +	if (!ASSERT_OK(err, "jmp_table map update"))
> +		goto out;
> +
> +	err = bpf_prog_test_run_opts(main_fd, &topts);
> +	ASSERT_OK(err, "entry prog test run");
> +	ASSERT_EQ(topts.retval, 0, "tailcall retval");
> +
> +	data_fd = bpf_map__fd(obj->maps.bss);
> +	if (!ASSERT_GE(data_fd, 0, "bss map fd"))
> +		goto out;
> +
> +	i = 0;
> +	err = bpf_map_lookup_elem(data_fd, &i, &val);
> +	ASSERT_OK(err, "bss map lookup");
> +	ASSERT_EQ(val, 1, "done flag is set");
> +
> +out:
> +	tailcall_bpf2bpf6__destroy(obj);
> +}
> +
>   void test_tailcalls(void)
>   {
>   	if (test__start_subtest("tailcall_1"))
> @@ -855,4 +908,6 @@ void test_tailcalls(void)
>   		test_tailcall_bpf2bpf_4(false);
>   	if (test__start_subtest("tailcall_bpf2bpf_5"))
>   		test_tailcall_bpf2bpf_4(true);
> +	if (test__start_subtest("tailcall_bpf2bpf_6"))
> +		test_tailcall_bpf2bpf_6();
>   }
> diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c
> new file mode 100644
> index 000000000000..256de9bcc621
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c
> @@ -0,0 +1,42 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/bpf.h>
> +#include <bpf/bpf_helpers.h>
> +
> +#define __unused __attribute__((always_unused))
> +
> +struct {
> +	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
> +	__uint(max_entries, 1);
> +	__uint(key_size, sizeof(__u32));
> +	__uint(value_size, sizeof(__u32));
> +} jmp_table SEC(".maps");
> +
> +int done = 0;
> +
> +SEC("tc")
> +int classifier_0(struct __sk_buff *skb __unused)
> +{
> +	done = 1;
> +	return 0;
> +}

Looks like this fails CI with:

   progs/tailcall_bpf2bpf6.c:17:40: error: unknown attribute 'always_unused' ignored [-Werror,-Wunknown-attributes]
   int classifier_0(struct __sk_buff *skb __unused)
                                          ^~~~~~~~
   progs/tailcall_bpf2bpf6.c:5:33: note: expanded from macro '__unused'
   #define __unused __attribute__((always_unused))
                                   ^~~~~~~~~~~~~
   1 error generated.
   make: *** [Makefile:509: /tmp/runner/work/bpf/bpf/tools/testing/selftests/bpf/tailcall_bpf2bpf6.o] Error 1
   make: *** Waiting for unfinished jobs....
   Error: Process completed with exit code 2.


* Re: [PATCH bpf-next 1/2] bpf, x86: Fix tail call count offset calculation on bpf2bpf call
  2022-06-15 15:17 ` [PATCH bpf-next 1/2] bpf, x86: Fix tail call count offset calculation on bpf2bpf call Jakub Sitnicki
@ 2022-06-16 14:45   ` Daniel Borkmann
  2022-06-16 15:01     ` Maciej Fijalkowski
  0 siblings, 1 reply; 7+ messages in thread
From: Daniel Borkmann @ 2022-06-16 14:45 UTC (permalink / raw)
  To: Jakub Sitnicki, bpf
  Cc: netdev, Alexei Starovoitov, Andrii Nakryiko, Maciej Fijalkowski,
	kernel-team

On 6/15/22 5:17 PM, Jakub Sitnicki wrote:
[...]
> int entry(struct __sk_buff * skb):
>     0xffffffffa0201788:  nop    DWORD PTR [rax+rax*1+0x0]
>     0xffffffffa020178d:  xor    eax,eax
>     0xffffffffa020178f:  push   rbp
>     0xffffffffa0201790:  mov    rbp,rsp
>     0xffffffffa0201793:  sub    rsp,0x8
>     0xffffffffa020179a:  push   rax
>     0xffffffffa020179b:  xor    esi,esi
>     0xffffffffa020179d:  mov    BYTE PTR [rbp-0x1],sil
>     0xffffffffa02017a1:  mov    rax,QWORD PTR [rbp-0x9]	!!! tail call count
>     0xffffffffa02017a8:  call   0xffffffffa02017d8       !!! is at rbp-0x10
>     0xffffffffa02017ad:  leave
>     0xffffffffa02017ae:  ret
> 
> Fix it by rounding up the BPF stack depth to a multiple of 8, when
> calculating the tail call count offset on stack.
> 
> Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
> Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
> ---
>   arch/x86/net/bpf_jit_comp.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index f298b18a9a3d..c98b8c0ed3b8 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1420,8 +1420,9 @@ st:			if (is_imm8(insn->off))
>   		case BPF_JMP | BPF_CALL:
>   			func = (u8 *) __bpf_call_base + imm32;
>   			if (tail_call_reachable) {
> +				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
>   				EMIT3_off32(0x48, 0x8B, 0x85,
> -					    -(bpf_prog->aux->stack_depth + 8));
> +					    -round_up(bpf_prog->aux->stack_depth, 8) - 8);

Lgtm, great catch by the way!


* Re: [PATCH bpf-next 1/2] bpf, x86: Fix tail call count offset calculation on bpf2bpf call
  2022-06-16 14:45   ` Daniel Borkmann
@ 2022-06-16 15:01     ` Maciej Fijalkowski
  0 siblings, 0 replies; 7+ messages in thread
From: Maciej Fijalkowski @ 2022-06-16 15:01 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Jakub Sitnicki, bpf, netdev, Alexei Starovoitov, Andrii Nakryiko,
	kernel-team

On Thu, Jun 16, 2022 at 04:45:09PM +0200, Daniel Borkmann wrote:
> On 6/15/22 5:17 PM, Jakub Sitnicki wrote:
> [...]
> > int entry(struct __sk_buff * skb):
> >     0xffffffffa0201788:  nop    DWORD PTR [rax+rax*1+0x0]
> >     0xffffffffa020178d:  xor    eax,eax
> >     0xffffffffa020178f:  push   rbp
> >     0xffffffffa0201790:  mov    rbp,rsp
> >     0xffffffffa0201793:  sub    rsp,0x8
> >     0xffffffffa020179a:  push   rax
> >     0xffffffffa020179b:  xor    esi,esi
> >     0xffffffffa020179d:  mov    BYTE PTR [rbp-0x1],sil
> >     0xffffffffa02017a1:  mov    rax,QWORD PTR [rbp-0x9]	!!! tail call count
> >     0xffffffffa02017a8:  call   0xffffffffa02017d8       !!! is at rbp-0x10
> >     0xffffffffa02017ad:  leave
> >     0xffffffffa02017ae:  ret
> > 
> > Fix it by rounding up the BPF stack depth to a multiple of 8, when
> > calculating the tail call count offset on stack.
> > 
> > Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
> > Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
> > ---
> >   arch/x86/net/bpf_jit_comp.c | 3 ++-
> >   1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> > index f298b18a9a3d..c98b8c0ed3b8 100644
> > --- a/arch/x86/net/bpf_jit_comp.c
> > +++ b/arch/x86/net/bpf_jit_comp.c
> > @@ -1420,8 +1420,9 @@ st:			if (is_imm8(insn->off))
> >   		case BPF_JMP | BPF_CALL:
> >   			func = (u8 *) __bpf_call_base + imm32;
> >   			if (tail_call_reachable) {
> > +				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
> >   				EMIT3_off32(0x48, 0x8B, 0x85,
> > -					    -(bpf_prog->aux->stack_depth + 8));
> > +					    -round_up(bpf_prog->aux->stack_depth, 8) - 8);
> 
> Lgtm, great catch by the way!

Indeed!

Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

I was wondering if it would be possible for the JIT to work only with the
stack depth rounded up to 8, since that is what we do everywhere we use
it...
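
Something along these lines, purely as a sketch (helper name made up):

	/* offset of the tail call count slot, below the rounded-up BPF stack */
	static int tail_call_cnt_off(u32 stack_depth)
	{
		return -(int)(round_up(stack_depth, 8) + 8);
	}

so that call sites like the one above could just do
tail_call_cnt_off(bpf_prog->aux->stack_depth).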


* Re: [PATCH bpf-next 2/2] selftests/bpf: Test tail call counting with bpf2bpf and data on stack
  2022-06-16 14:41   ` Daniel Borkmann
@ 2022-06-16 15:23     ` Jakub Sitnicki
  0 siblings, 0 replies; 7+ messages in thread
From: Jakub Sitnicki @ 2022-06-16 15:23 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: bpf, netdev, Alexei Starovoitov, Andrii Nakryiko,
	Maciej Fijalkowski, kernel-team

On Thu, Jun 16, 2022 at 04:41 PM +02, Daniel Borkmann wrote:

[...]

> Looks like this fails CI with:
>
>   progs/tailcall_bpf2bpf6.c:17:40: error: unknown attribute 'always_unused' ignored [-Werror,-Wunknown-attributes]
>   int classifier_0(struct __sk_buff *skb __unused)
>                                          ^~~~~~~~
>   progs/tailcall_bpf2bpf6.c:5:33: note: expanded from macro '__unused'
>   #define __unused __attribute__((always_unused))
>                                   ^~~~~~~~~~~~~
>   1 error generated.
>   make: *** [Makefile:509: /tmp/runner/work/bpf/bpf/tools/testing/selftests/bpf/tailcall_bpf2bpf6.o] Error 1
>   make: *** Waiting for unfinished jobs....
>   Error: Process completed with exit code 2.

I will switch to __attribute__((unused)) and ignore what checkpatch
says. Will respin. Thanks!
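
I.e., roughly (untested):

	-#define __unused __attribute__((always_unused))
	+#define __unused __attribute__((unused))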

