* [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite
@ 2021-07-26  8:17 Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 01/14] bpf/tests: add BPF_JMP32 test cases Johan Almbladh
                   ` (14 more replies)
  0 siblings, 15 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Greetings,

During my work on the 32-bit MIPS JIT implementation I also added a
number of new test cases to the test_bpf kernel module. I found it
valuable to be able to thoroughly test the JIT at a low level with
minimal dependency on user space tooling. If you think it would be
useful, I have prepared a patch set with my additions. I have verified
it on x86_64 and i386, with and without JIT and JIT hardening. The
interpreter passes all tests. The JITs do too, with one exception; see
the NOTE below. The result for the x86_64 JIT is summarized below.

    test_bpf: Summary: 577 PASSED, 0 FAILED, [565/565 JIT'ed]
    test_bpf: test_tail_calls: Summary: 6 PASSED, 1 FAILED, [7/7 JIT'ed]

I have inserted the new tests in the location where related tests are run,
rather than putting them at the end. I have also tried to use the same
description style as the surrounding tests. Below is a summary of the
new tests.

* Operations not previously covered
  JMP32, ALU32 ARSH, remaining ATOMIC operations including
  XCHG and CMPXCHG.

* ALU operations with edge cases
  32-bit JITs implement ALU64 operations with two 32-bit registers per
  operand, so even "trivial" operations like bit shifts are non-trivial
  to implement (see the sketch after this list). Test different input
  values that may trigger different JIT code paths. JITs may also
  implement BPF_K operations differently depending on whether or not
  the immediate fits the corresponding field width of the native CPU
  instruction, so test that too.

* Word order in load/store
  The word order should follow endianness. Test that DW load/store
  operations result in the expected word order in memory.

* 32-bit eBPF argument zero extension
  On a 32-bit JIT the eBPF context argument is a 32-bit pointer. If it
  is passed in a CPU register, only one register in the mapped pair
  contains valid data. Verify that the value is properly zero-extended.

* Long conditional jumps
  Test to trigger the relative-to-absolute branch conversion in MIPS JITs,
  when the PC-relative offset overflows the field width of the MIPS branch
  instruction.

* Tail calls
  A new test suite to test tail calls. Also test error paths and the
  TCC limit.
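
For illustration, below is a minimal C sketch of the kind of
decomposition the ALU edge-case item above refers to: a single ALU64
BPF_ADD lowered into per-word operations with an explicit carry. It is
a hypothetical example, not code from any particular JIT:

    /* "dst += src", where each 64-bit eBPF register is mapped onto a
     * {hi, lo} pair of 32-bit CPU registers.
     */
    static void alu64_add(u32 *dst_hi, u32 *dst_lo, u32 src_hi, u32 src_lo)
    {
        u32 lo = *dst_lo + src_lo;
        u32 carry = lo < *dst_lo;   /* carry out of the low word */

        *dst_hi = *dst_hi + src_hi + carry;
        *dst_lo = lo;
    }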

NOTE: There is a minor discrepancy between the interpreter and the
(x86) JITs. With MAX_TAIL_CALL_CNT = 32, the interpreter seems to allow
up to 33 tail calls, whereas the JITs stop at 32. This causes the max
TCC test to fail for the JITs, since I used the interpreter as the
reference. Either we change the interpreter behavior, change the JITs,
or relax the test to allow both behaviors.
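
To make the possible off-by-one concrete, here is a hypothetical sketch
of two count checks that differ by exactly one permitted tail call. It
is an illustration only, not the actual interpreter or JIT code:

    /* Variant A: test before increment, permits 32 tail calls */
    if (tail_call_cnt >= MAX_TAIL_CALL_CNT)
        goto out;
    tail_call_cnt++;

    /* Variant B: post-increment in the test, permits 33 tail calls */
    if (tail_call_cnt++ > MAX_TAIL_CALL_CNT)
        goto out;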

Let me know what you think.

Cheers,
Johan

Johan Almbladh (14):
  bpf/tests: add BPF_JMP32 test cases
  bpf/tests: add BPF_MOV tests for zero and sign extension
  bpf/tests: fix typos in test case descriptions
  bpf/tests: add more tests of ALU32 and ALU64 bitwise operations
  bpf/tests: add more ALU32 tests for BPF_LSH/RSH/ARSH
  bpf/tests: add more BPF_LSH/RSH/ARSH tests for ALU64
  bpf/tests: add more ALU64 BPF_MUL tests
  bpf/tests: add tests for ALU operations implemented with function
    calls
  bpf/tests: add word-order tests for load/store of double words
  bpf/tests: add branch conversion JIT test
  bpf/tests: add test for 32-bit context pointer argument passing
  bpf/tests: add tests for atomic operations
  bpf/tests: add tests for BPF_CMPXCHG
  bpf/tests: add tail call test suite

 lib/test_bpf.c | 2732 +++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 2475 insertions(+), 257 deletions(-)

-- 
2.25.1



* [RFC PATCH 01/14] bpf/tests: add BPF_JMP32 test cases
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 02/14] bpf/tests: add BPF_MOV tests for zero and sign extension Johan Almbladh
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

An eBPF JIT may implement JMP32 operations in a different way than JMP,
especially on 32-bit architectures. This patch adds a series of tests
for JMP32 operations, mainly for testing JITs.
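
As an illustration of the difference, sketched in C below (one possible
lowering, not code from any particular JIT): on a 32-bit target a JMP32
compare only involves the low words of the operands, whereas the
corresponding JMP compare must combine both halves of the register
pair:

    /* BPF_JMP32 | BPF_JEQ on a {hi, lo} register mapping */
    static bool jeq32_taken(u32 dst_hi, u32 dst_lo, u32 src_hi, u32 src_lo)
    {
        return dst_lo == src_lo;    /* low words only */
    }

    /* BPF_JMP | BPF_JEQ: same condition over the full 64-bit value */
    static bool jeq64_taken(u32 dst_hi, u32 dst_lo, u32 src_hi, u32 src_lo)
    {
        return dst_hi == src_hi && dst_lo == src_lo;
    }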

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 511 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 511 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index f6d5d30d01bf..bfac033db590 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -4398,6 +4398,517 @@ static struct bpf_test tests[] = {
 		{ { 0, 4134 } },
 		.fill_helper = bpf_fill_stxdw,
 	},
+	/* BPF_JMP32 | BPF_JEQ | BPF_K */
+	{
+		"JMP32_JEQ_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JEQ, R0, 321, 1),
+			BPF_JMP32_IMM(BPF_JEQ, R0, 123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JEQ_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 12345678),
+			BPF_JMP32_IMM(BPF_JEQ, R0, 12345678 & 0xffff, 1),
+			BPF_JMP32_IMM(BPF_JEQ, R0, 12345678, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 12345678 } }
+	},
+	{
+		"JMP32_JEQ_K: negative immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JEQ, R0,  123, 1),
+			BPF_JMP32_IMM(BPF_JEQ, R0, -123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	/* BPF_JMP32 | BPF_JEQ | BPF_X */
+	{
+		"JMP32_JEQ_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 1234),
+			BPF_ALU32_IMM(BPF_MOV, R1, 4321),
+			BPF_JMP32_REG(BPF_JEQ, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 1234),
+			BPF_JMP32_REG(BPF_JEQ, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1234 } }
+	},
+	/* BPF_JMP32 | BPF_JNE | BPF_K */
+	{
+		"JMP32_JNE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JNE, R0, 123, 1),
+			BPF_JMP32_IMM(BPF_JNE, R0, 321, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JNE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 12345678),
+			BPF_JMP32_IMM(BPF_JNE, R0, 12345678, 1),
+			BPF_JMP32_IMM(BPF_JNE, R0, 12345678 & 0xffff, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 12345678 } }
+	},
+	{
+		"JMP32_JNE_K: negative immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JNE, R0, -123, 1),
+			BPF_JMP32_IMM(BPF_JNE, R0,  123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	/* BPF_JMP32 | BPF_JNE | BPF_X */
+	{
+		"JMP32_JNE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 1234),
+			BPF_ALU32_IMM(BPF_MOV, R1, 1234),
+			BPF_JMP32_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 4321),
+			BPF_JMP32_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1234 } }
+	},
+	/* BPF_JMP32 | BPF_JSET | BPF_K */
+	{
+		"JMP32_JSET_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_JMP32_IMM(BPF_JSET, R0, 2, 1),
+			BPF_JMP32_IMM(BPF_JSET, R0, 3, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"JMP32_JSET_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x40000000),
+			BPF_JMP32_IMM(BPF_JSET, R0, 0x3fffffff, 1),
+			BPF_JMP32_IMM(BPF_JSET, R0, 0x60000000, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x40000000 } }
+	},
+	{
+		"JMP32_JSET_K: negative immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSET, R0, -1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	/* BPF_JMP32 | BPF_JSET | BPF_X */
+	{
+		"JMP32_JSET_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 8),
+			BPF_ALU32_IMM(BPF_MOV, R1, 7),
+			BPF_JMP32_REG(BPF_JSET, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 8 | 2),
+			BPF_JMP32_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 8 } }
+	},
+	/* BPF_JMP32 | BPF_JGT | BPF_K */
+	{
+		"JMP32_JGT_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JGT, R0, 123, 1),
+			BPF_JMP32_IMM(BPF_JGT, R0, 122, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JGT_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_JMP32_IMM(BPF_JGT, R0, 0xffffffff, 1),
+			BPF_JMP32_IMM(BPF_JGT, R0, 0xfffffffd, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JGT | BPF_X */
+	{
+		"JMP32_JGT_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
+			BPF_JMP32_REG(BPF_JGT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
+			BPF_JMP32_REG(BPF_JGT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JGE | BPF_K */
+	{
+		"JMP32_JGE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JGE, R0, 124, 1),
+			BPF_JMP32_IMM(BPF_JGE, R0, 123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JGE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_JMP32_IMM(BPF_JGE, R0, 0xffffffff, 1),
+			BPF_JMP32_IMM(BPF_JGE, R0, 0xfffffffe, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JGE | BPF_X */
+	{
+		"JMP32_JGE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
+			BPF_JMP32_REG(BPF_JGE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffe),
+			BPF_JMP32_REG(BPF_JGE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JLT | BPF_K */
+	{
+		"JMP32_JLT_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JLT, R0, 123, 1),
+			BPF_JMP32_IMM(BPF_JLT, R0, 124, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JLT_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_JMP32_IMM(BPF_JLT, R0, 0xfffffffd, 1),
+			BPF_JMP32_IMM(BPF_JLT, R0, 0xffffffff, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JLT | BPF_X */
+	{
+		"JMP32_JLT_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
+			BPF_JMP32_REG(BPF_JLT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
+			BPF_JMP32_REG(BPF_JLT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JLE | BPF_K */
+	{
+		"JMP32_JLE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JLE, R0, 122, 1),
+			BPF_JMP32_IMM(BPF_JLE, R0, 123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JLE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_JMP32_IMM(BPF_JLE, R0, 0xfffffffd, 1),
+			BPF_JMP32_IMM(BPF_JLE, R0, 0xfffffffe, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JLE | BPF_X */
+	{
+		"JMP32_JLE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
+			BPF_JMP32_REG(BPF_JLE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffe),
+			BPF_JMP32_REG(BPF_JLE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JSGT | BPF_K */
+	{
+		"JMP32_JSGT_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSGT, R0, -123, 1),
+			BPF_JMP32_IMM(BPF_JSGT, R0, -124, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"JMP32_JSGT_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_JMP32_IMM(BPF_JSGT, R0, -12345678, 1),
+			BPF_JMP32_IMM(BPF_JSGT, R0, -12345679, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSGT | BPF_X */
+	{
+		"JMP32_JSGT_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
+			BPF_JMP32_REG(BPF_JSGT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345679),
+			BPF_JMP32_REG(BPF_JSGT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSGE | BPF_K */
+	{
+		"JMP32_JSGE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSGE, R0, -122, 1),
+			BPF_JMP32_IMM(BPF_JSGE, R0, -123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"JMP32_JSGE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_JMP32_IMM(BPF_JSGE, R0, -12345677, 1),
+			BPF_JMP32_IMM(BPF_JSGE, R0, -12345678, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSGE | BPF_X */
+	{
+		"JMP32_JSGE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345677),
+			BPF_JMP32_REG(BPF_JSGE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
+			BPF_JMP32_REG(BPF_JSGE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSLT | BPF_K */
+	{
+		"JMP32_JSLT_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSLT, R0, -123, 1),
+			BPF_JMP32_IMM(BPF_JSLT, R0, -122, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"JMP32_JSLT_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_JMP32_IMM(BPF_JSLT, R0, -12345678, 1),
+			BPF_JMP32_IMM(BPF_JSLT, R0, -12345677, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSLT | BPF_X */
+	{
+		"JMP32_JSLT_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
+			BPF_JMP32_REG(BPF_JSLT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345677),
+			BPF_JMP32_REG(BPF_JSLT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSLE | BPF_K */
+	{
+		"JMP32_JSLE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSLE, R0, -124, 1),
+			BPF_JMP32_IMM(BPF_JSLE, R0, -123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"JMP32_JSLE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_JMP32_IMM(BPF_JSLE, R0, -12345679, 1),
+			BPF_JMP32_IMM(BPF_JSLE, R0, -12345678, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSLE | BPF_X */
+	{
+		"JMP32_JSLE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345679),
+			BPF_JMP32_REG(BPF_JSLE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
+			BPF_JMP32_REG(BPF_JSLE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
 	/* BPF_JMP | BPF_EXIT */
 	{
 		"JMP_EXIT",
-- 
2.25.1



* [RFC PATCH 02/14] bpf/tests: add BPF_MOV tests for zero and sign extension
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 01/14] bpf/tests: add BPF_JMP32 test cases Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 03/14] bpf/tests: fix typos in test case descriptions Johan Almbladh
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Tests for ALU32 and ALU64 MOV with immediate values of different
magnitudes. Depending on the immediate field width of the native CPU
instructions, a JIT may generate different code for different immediate
values. Test that zero or sign extension is performed as expected.
Mainly for JIT testing.
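
As a worked illustration of the semantics being verified (standard eBPF
MOV behavior expressed in C, not code from this patch):

    u64 r;

    r = (u32)(s32)-123; /* ALU32 MOV: 0x00000000ffffff85, zero extended */
    r = (s64)(s32)-123; /* ALU64 MOV: 0xffffffffffffff85, sign extended */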

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 84 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index bfac033db590..9e232acddce8 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -2360,6 +2360,48 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU_MOV_K: small negative",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"ALU_MOV_K: small negative zero extension",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU_MOV_K: large negative",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123456789),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123456789 } }
+	},
+	{
+		"ALU_MOV_K: large negative zero extension",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123456789),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
 	{
 		"ALU64_MOV_K: dst = 2",
 		.u.insns_int = {
@@ -2412,6 +2454,48 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_MOV_K: small negative",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -123),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"ALU64_MOV_K: small negative sign extension",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -123),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xffffffff } }
+	},
+	{
+		"ALU64_MOV_K: large negative",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -123456789),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123456789 } }
+	},
+	{
+		"ALU64_MOV_K: large negative sign extension",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -123456789),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xffffffff } }
+	},
 	/* BPF_ALU | BPF_ADD | BPF_X */
 	{
 		"ALU_ADD_X: 1 + 2 = 3",
-- 
2.25.1



* [RFC PATCH 03/14] bpf/tests: fix typos in test case descriptions
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 01/14] bpf/tests: add BPF_JMP32 test cases Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 02/14] bpf/tests: add BPF_MOV tests for zero and sign extension Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 04/14] bpf/tests: add more tests of ALU32 and ALU64 bitwise operations Johan Almbladh
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch corrects the test description in a number of cases where
the description differed from what was actually tested and expected.

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 9e232acddce8..9695d13812df 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -3537,7 +3537,7 @@ static struct bpf_test tests[] = {
 		{ { 0, 0xffffffff } },
 	},
 	{
-		"ALU64_AND_K: 0x0000ffffffff0000 & 0x0 = 0x0000ffff00000000",
+		"ALU64_AND_K: 0x0000ffffffff0000 & 0x0 = 0x0000000000000000",
 		.u.insns_int = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000000000000000LL),
@@ -3553,7 +3553,7 @@ static struct bpf_test tests[] = {
 		{ { 0, 0x1 } },
 	},
 	{
-		"ALU64_AND_K: 0x0000ffffffff0000 & -1 = 0x0000ffffffffffff",
+		"ALU64_AND_K: 0x0000ffffffff0000 & -1 = 0x0000ffffffff0000",
 		.u.insns_int = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000ffffffff0000LL),
@@ -3679,7 +3679,7 @@ static struct bpf_test tests[] = {
 		{ { 0, 0xffffffff } },
 	},
 	{
-		"ALU64_OR_K: 0x0000ffffffff0000 | 0x0 = 0x0000ffff00000000",
+		"ALU64_OR_K: 0x0000ffffffff0000 | 0x0 = 0x0000ffffffff0000",
 		.u.insns_int = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000ffffffff0000LL),
@@ -3810,7 +3810,7 @@ static struct bpf_test tests[] = {
 		{ { 0, 3 } },
 	},
 	{
-		"ALU64_XOR_K: 1 & 0xffffffff = 0xfffffffe",
+		"ALU64_XOR_K: 1 ^ 0xffffffff = 0xfffffffe",
 		.u.insns_int = {
 			BPF_LD_IMM64(R0, 1),
 			BPF_ALU64_IMM(BPF_XOR, R0, 0xffffffff),
-- 
2.25.1



* [RFC PATCH 04/14] bpf/tests: add more tests of ALU32 and ALU64 bitwise operations
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (2 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 03/14] bpf/tests: fix typos in test case descriptions Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 05/14] bpf/tests: add more ALU32 tests for BPF_LSH/RSH/ARSH Johan Almbladh
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch adds tests of BPF_AND, BPF_OR and BPF_XOR with different
magnitudes of the immediate value. Mainly checking 32-bit JIT sub-word
handling and zero/sign extension.
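
For reference, the semantics these tests rely on, sketched in C (an
illustration, not code from the patch): an ALU64 operation sign extends
the 32-bit immediate to 64 bits before the bitwise operation, while an
ALU32 operation zero extends its 32-bit result:

    u64 dst = 0x0123456789abcdefULL;
    u64 alu64 = dst & (u64)(s64)(s32)0xf0f0f0f0; /* imm sign extended */
    u64 alu32 = (u32)(dst & 0xf0f0f0f0);         /* result zero extended */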

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 210 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 210 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 9695d13812df..67e7de776c12 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -3514,6 +3514,44 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffffffff } },
 	},
+	{
+		"ALU_AND_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01020304),
+			BPF_ALU32_IMM(BPF_AND, R0, 15),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 4 } }
+	},
+	{
+		"ALU_AND_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xf1f2f3f4),
+			BPF_ALU32_IMM(BPF_AND, R0, 0xafbfcfdf),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xa1b2c3d4 } }
+	},
+	{
+		"ALU_AND_K: Zero extension",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x0000000080a0c0e0LL),
+			BPF_ALU32_IMM(BPF_AND, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	{
 		"ALU64_AND_K: 3 & 2 = 2",
 		.u.insns_int = {
@@ -3584,6 +3622,38 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_AND_K: Sign extension 1",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x00000000090b0d0fLL),
+			BPF_ALU64_IMM(BPF_AND, R0, 0x0f0f0f0f),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"ALU64_AND_K: Sign extension 2",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x0123456780a0c0e0LL),
+			BPF_ALU64_IMM(BPF_AND, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	/* BPF_ALU | BPF_OR | BPF_X */
 	{
 		"ALU_OR_X: 1 | 2 = 3",
@@ -3656,6 +3726,44 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffffffff } },
 	},
+	{
+		"ALU_OR_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01020304),
+			BPF_ALU32_IMM(BPF_OR, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01020305 } }
+	},
+	{
+		"ALU_OR_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01020304),
+			BPF_ALU32_IMM(BPF_OR, R0, 0xa0b0c0d0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xa1b2c3d4 } }
+	},
+	{
+		"ALU_OR_K: Zero extension",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x00000000f9fbfdffLL),
+			BPF_ALU32_IMM(BPF_OR, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	{
 		"ALU64_OR_K: 1 | 2 = 3",
 		.u.insns_int = {
@@ -3726,6 +3834,38 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_OR_K: Sign extension 1",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x012345678fafcfefLL),
+			BPF_ALU64_IMM(BPF_OR, R0, 0x0f0f0f0f),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"ALU64_OR_K: Sign extension 2",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0xfffffffff9fbfdffLL),
+			BPF_ALU64_IMM(BPF_OR, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	/* BPF_ALU | BPF_XOR | BPF_X */
 	{
 		"ALU_XOR_X: 5 ^ 6 = 3",
@@ -3798,6 +3938,44 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xfffffffe } },
 	},
+	{
+		"ALU_XOR_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01020304),
+			BPF_ALU32_IMM(BPF_XOR, R0, 15),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x0102030b } }
+	},
+	{
+		"ALU_XOR_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xf1f2f3f4),
+			BPF_ALU32_IMM(BPF_XOR, R0, 0xafbfcfdf),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x5e4d3c2b } }
+	},
+	{
+		"ALU_XOR_K: Zero extension",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x00000000795b3d1fLL),
+			BPF_ALU32_IMM(BPF_XOR, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	{
 		"ALU64_XOR_K: 5 ^ 6 = 3",
 		.u.insns_int = {
@@ -3868,6 +4046,38 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_XOR_K: Sign extension 1",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x0123456786a4c2e0LL),
+			BPF_ALU64_IMM(BPF_XOR, R0, 0x0f0f0f0f),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"ALU64_XOR_K: Sign extension 2",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0xfedcba98795b3d1fLL),
+			BPF_ALU64_IMM(BPF_XOR, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	/* BPF_ALU | BPF_LSH | BPF_X */
 	{
 		"ALU_LSH_X: 1 << 1 = 2",
-- 
2.25.1



* [RFC PATCH 05/14] bpf/tests: add more ALU32 tests for BPF_LSH/RSH/ARSH
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (3 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 04/14] bpf/tests: add more tests of ALU32 and ALU64 bitwise operations Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 06/14] bpf/tests: add more BPF_LSH/RSH/ARSH tests for ALU64 Johan Almbladh
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch adds more tests of the ALU32 shift operations BPF_LSH and
BPF_RSH, including the special case of a zero immediate. Also add the
corresponding BPF_ARSH tests, which were missing for ALU32.
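
As context for the zero-immediate case (a general observation, not
taken from the patch): many CPUs mask the shift count, e.g. x86 reduces
32-bit shift counts modulo 32, so JIT code paths that rewrite shift
amounts can go wrong precisely at zero. The tests verify the identity
behavior:

    u32 v = 0x12345678;

    v = v << 0; /* must still be 0x12345678 */
    v = v >> 0; /* likewise unchanged */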

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 102 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 67e7de776c12..ef75dbf53ec2 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -4103,6 +4103,18 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x80000000 } },
 	},
+	{
+		"ALU_LSH_X: 0x12345678 << 12 = 0x45678000",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU32_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x45678000 } }
+	},
 	{
 		"ALU64_LSH_X: 1 << 1 = 2",
 		.u.insns_int = {
@@ -4150,6 +4162,28 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x80000000 } },
 	},
+	{
+		"ALU_LSH_K: 0x12345678 << 12 = 0x45678000",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_LSH, R0, 12),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x45678000 } }
+	},
+	{
+		"ALU_LSH_K: 0x12345678 << 0 = 0x12345678",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_LSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x12345678 } }
+	},
 	{
 		"ALU64_LSH_K: 1 << 1 = 2",
 		.u.insns_int = {
@@ -4197,6 +4231,18 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{
+		"ALU_RSH_X: 0x12345678 >> 20 = 0x123",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, 20),
+			BPF_ALU32_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x123 } }
+	},
 	{
 		"ALU64_RSH_X: 2 >> 1 = 1",
 		.u.insns_int = {
@@ -4244,6 +4290,28 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{
+		"ALU_RSH_K: 0x12345678 >> 20 = 0x123",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_RSH, R0, 20),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x123 } }
+	},
+	{
+		"ALU_RSH_K: 0x12345678 >> 0 = 0x12345678",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_RSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x12345678 } }
+	},
 	{
 		"ALU64_RSH_K: 2 >> 1 = 1",
 		.u.insns_int = {
@@ -4267,6 +4335,18 @@ static struct bpf_test tests[] = {
 		{ { 0, 1 } },
 	},
 	/* BPF_ALU | BPF_ARSH | BPF_X */
+	{
+		"ALU32_ARSH_X: -1234 >> 7 = -10",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -1234),
+			BPF_ALU32_IMM(BPF_MOV, R1, 7),
+			BPF_ALU32_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -10 } }
+	},
 	{
 		"ALU_ARSH_X: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
 		.u.insns_int = {
@@ -4280,6 +4360,28 @@ static struct bpf_test tests[] = {
 		{ { 0, 0xffff00ff } },
 	},
 	/* BPF_ALU | BPF_ARSH | BPF_K */
+	{
+		"ALU32_ARSH_K: -1234 >> 7 = -10",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -1234),
+			BPF_ALU32_IMM(BPF_ARSH, R0, 7),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -10 } }
+	},
+	{
+		"ALU32_ARSH_K: -1234 >> 0 = -1234",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -1234),
+			BPF_ALU32_IMM(BPF_ARSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1234 } }
+	},
 	{
 		"ALU_ARSH_K: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
 		.u.insns_int = {
-- 
2.25.1



* [RFC PATCH 06/14] bpf/tests: add more BPF_LSH/RSH/ARSH tests for ALU64
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (4 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 05/14] bpf/tests: add more ALU32 tests for BPF_LSH/RSH/ARSH Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 07/14] bpf/tests: add more ALU64 BPF_MUL tests Johan Almbladh
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch adds a number of tests for BPF_LSH, BPF_RSH and BPF_ARSH
ALU64 operations with values that may trigger different JIT code paths.
Mainly testing 32-bit JITs that implement ALU64 operations with two
32-bit CPU registers per operand.
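
As an illustration of why these cases matter, here is a hypothetical C
sketch of ALU64 BPF_LSH by an immediate k on a {hi, lo} pair of 32-bit
registers. It shows one possible lowering, not any particular JIT, and
each tested range (zero, < 32, == 32, > 32) takes a different path:

    static void alu64_lsh_k(u32 *hi, u32 *lo, int k)
    {
        if (k == 0) {
            /* identity: nothing to emit */
        } else if (k < 32) {
            *hi = (*hi << k) | (*lo >> (32 - k));
            *lo <<= k;
        } else if (k == 32) {
            *hi = *lo;
            *lo = 0;
        } else {            /* 32 < k < 64 */
            *hi = *lo << (k - 32);
            *lo = 0;
        }
    }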

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 544 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 542 insertions(+), 2 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ef75dbf53ec2..b930fa35b9ef 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -4139,6 +4139,106 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x80000000 } },
 	},
+	{
+		"ALU64_LSH_X: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xbcdef000 } }
+	},
+	{
+		"ALU64_LSH_X: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x3456789a } }
+	},
+	{
+		"ALU64_LSH_X: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_LSH_X: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x9abcdef0 } }
+	},
+	{
+		"ALU64_LSH_X: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_LSH_X: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_LSH_X: Zero shift, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_LSH_X: Zero shift, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01234567 } }
+	},
 	/* BPF_ALU | BPF_LSH | BPF_K */
 	{
 		"ALU_LSH_K: 1 << 1 = 2",
@@ -4206,6 +4306,86 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x80000000 } },
 	},
+	{
+		"ALU64_LSH_K: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 12),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xbcdef000 } }
+	},
+	{
+		"ALU64_LSH_K: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 12),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x3456789a } }
+	},
+	{
+		"ALU64_LSH_K: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 36),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_LSH_K: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 36),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x9abcdef0 } }
+	},
+	{
+		"ALU64_LSH_K: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_LSH_K: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 32),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_LSH_K: Zero shift",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
 	/* BPF_ALU | BPF_RSH | BPF_X */
 	{
 		"ALU_RSH_X: 2 >> 1 = 1",
@@ -4267,6 +4447,106 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{
+		"ALU64_RSH_X: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x56789abc } }
+	},
+	{
+		"ALU64_RSH_X: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x00081234 } }
+	},
+	{
+		"ALU64_RSH_X: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x08123456 } }
+	},
+	{
+		"ALU64_RSH_X: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_RSH_X: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
+	{
+		"ALU64_RSH_X: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_RSH_X: Zero shift, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_RSH_X: Zero shift, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
 	/* BPF_ALU | BPF_RSH | BPF_K */
 	{
 		"ALU_RSH_K: 2 >> 1 = 1",
@@ -4334,6 +4614,86 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{
+		"ALU64_RSH_K: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 12),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x56789abc } }
+	},
+	{
+		"ALU64_RSH_K: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 12),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x00081234 } }
+	},
+	{
+		"ALU64_RSH_K: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 36),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x08123456 } }
+	},
+	{
+		"ALU64_RSH_K: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 36),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_RSH_K: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
+	{
+		"ALU64_RSH_K: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_RSH_K: Zero shift",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
 	/* BPF_ALU | BPF_ARSH | BPF_X */
 	{
 		"ALU32_ARSH_X: -1234 >> 7 = -10",
@@ -4348,7 +4708,7 @@ static struct bpf_test tests[] = {
 		{ { 0, -10 } }
 	},
 	{
-		"ALU_ARSH_X: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
+		"ALU64_ARSH_X: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
 		.u.insns_int = {
 			BPF_LD_IMM64(R0, 0xff00ff0000000000LL),
 			BPF_ALU32_IMM(BPF_MOV, R1, 40),
@@ -4359,6 +4719,106 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffff00ff } },
 	},
+	{
+		"ALU64_ARSH_X: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x56789abc } }
+	},
+	{
+		"ALU64_ARSH_X: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfff81234 } }
+	},
+	{
+		"ALU64_ARSH_X: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xf8123456 } }
+	},
+	{
+		"ALU64_ARSH_X: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1 } }
+	},
+	{
+		"ALU64_ARSH_X: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
+	{
+		"ALU64_ARSH_X: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1 } }
+	},
+	{
+		"ALU64_ARSH_X: Zero shift, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_ARSH_X: Zero shift, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
 	/* BPF_ALU | BPF_ARSH | BPF_K */
 	{
 		"ALU32_ARSH_K: -1234 >> 7 = -10",
@@ -4383,7 +4843,7 @@ static struct bpf_test tests[] = {
 		{ { 0, -1234 } }
 	},
 	{
-		"ALU_ARSH_K: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
+		"ALU64_ARSH_K: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
 		.u.insns_int = {
 			BPF_LD_IMM64(R0, 0xff00ff0000000000LL),
 			BPF_ALU64_IMM(BPF_ARSH, R0, 40),
@@ -4393,6 +4853,86 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffff00ff } },
 	},
+	{
+		"ALU64_ARSH_K: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 12),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x56789abc } }
+	},
+	{
+		"ALU64_ARSH_K: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 12),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfff81234 } }
+	},
+	{
+		"ALU64_ARSH_K: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 36),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xf8123456 } }
+	},
+	{
+		"ALU64_ARSH_K: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xf123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 36),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1 } }
+	},
+	{
+		"ALU64_ARSH_K: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
+	{
+		"ALU64_ARSH_K: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 32),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1 } }
+	},
+	{
+		"ALU64_ARSH_K: Zero shift",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
 	/* BPF_ALU | BPF_NEG */
 	{
 		"ALU_NEG: -(3) = -3",
-- 
2.25.1



* [RFC PATCH 07/14] bpf/tests: add more ALU64 BPF_MUL tests
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (5 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 06/14] bpf/tests: add more BPF_LSH/RSH/ARSH tests for ALU64 Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 08/14] bpf/tests: add tests for ALU operations implemented with function calls Johan Almbladh
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch adds BPF_MUL tests for 64x32 and 64x64 multiply. Mainly
testing 32-bit JITs that implement ALU64 operations with two 32-bit
CPU registers per operand.
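
The expected values follow from the schoolbook decomposition into
32-bit halves, sketched below in C as an illustration (not code from
the patch); the cross products only contribute to the high word:

    static u64 mul64(u32 a_hi, u32 a_lo, u32 b_hi, u32 b_lo)
    {
        /* (2^32 * a_hi + a_lo) * (2^32 * b_hi + b_lo) mod 2^64;
         * the a_hi * b_hi term vanishes modulo 2^64.
         */
        u64 prod = (u64)a_lo * b_lo;

        prod += ((u64)a_hi * b_lo + (u64)a_lo * b_hi) << 32;
        return prod;
    }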

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index b930fa35b9ef..eb61088a674f 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -3051,6 +3051,31 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 2147483647 } },
 	},
+	{
+		"ALU64_MUL_X: 64x64 multiply, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0fedcba987654321LL),
+			BPF_LD_IMM64(R1, 0x123456789abcdef0LL),
+			BPF_ALU64_REG(BPF_MUL, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xe5618cf0 } }
+	},
+	{
+		"ALU64_MUL_X: 64x64 multiply, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0fedcba987654321LL),
+			BPF_LD_IMM64(R1, 0x123456789abcdef0LL),
+			BPF_ALU64_REG(BPF_MUL, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x2236d88f } }
+	},
 	/* BPF_ALU | BPF_MUL | BPF_K */
 	{
 		"ALU_MUL_K: 2 * 3 = 6",
@@ -3161,6 +3186,29 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_MUL_K: 64x32 multiply, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_MUL, R0, 0x12345678),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xe242d208 } }
+	},
+	{
+		"ALU64_MUL_K: 64x32 multiply, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_MUL, R0, 0x12345678),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xc28f5c28 } }
+	},
 	/* BPF_ALU | BPF_DIV | BPF_X */
 	{
 		"ALU_DIV_X: 6 / 2 = 3",
-- 
2.25.1



* [RFC PATCH 08/14] bpf/tests: add tests for ALU operations implemented with function calls
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (6 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 07/14] bpf/tests: add more ALU64 BPF_MUL tests Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 09/14] bpf/tests: add word-order tests for load/store of double words Johan Almbladh
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

32-bit JITs may implement complex ALU64 instructions using function calls.
The new tests check aspects related to this, such as register clobbering
and register argument re-ordering.
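
A hypothetical sketch of the code shape being exercised, assuming a JIT
that calls a helper with a fixed operand order (the helper name and the
steps below are illustrative, not taken from any particular JIT):

    u64 jit_div64(u64 dividend, u64 divisor);

    /* For BPF_ALU64_REG(BPF_DIV, R2, R1) the JIT must, in effect:
     *   1. save any live caller-clobbered registers,
     *   2. move R2 into the first argument and R1 into the second,
     *      re-ordering if the current register mapping is reversed,
     *   3. call jit_div64 and move the result back into R2,
     *   4. restore the saved registers.
     */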

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 138 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 138 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index eb61088a674f..1115e39630ce 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -1916,6 +1916,144 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, -1 } }
 	},
+	{
+		/*
+		 * Register (non-)clobbering test, in the case where a 32-bit
+		 * JIT implements complex ALU64 operations via function calls.
+		 */
+		"INT: Register clobbering, R1 updated",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_ALU32_IMM(BPF_MOV, R1, 123456789),
+			BPF_ALU32_IMM(BPF_MOV, R2, 2),
+			BPF_ALU32_IMM(BPF_MOV, R3, 3),
+			BPF_ALU32_IMM(BPF_MOV, R4, 4),
+			BPF_ALU32_IMM(BPF_MOV, R5, 5),
+			BPF_ALU32_IMM(BPF_MOV, R6, 6),
+			BPF_ALU32_IMM(BPF_MOV, R7, 7),
+			BPF_ALU32_IMM(BPF_MOV, R8, 8),
+			BPF_ALU32_IMM(BPF_MOV, R9, 9),
+			BPF_ALU64_IMM(BPF_DIV, R1, 123456789),
+			BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
+			BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
+			BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
+			BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
+			BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
+			BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
+			BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
+			BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
+			BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
+			BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"INT: Register clobbering, R2 updated",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_ALU32_IMM(BPF_MOV, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789),
+			BPF_ALU32_IMM(BPF_MOV, R3, 3),
+			BPF_ALU32_IMM(BPF_MOV, R4, 4),
+			BPF_ALU32_IMM(BPF_MOV, R5, 5),
+			BPF_ALU32_IMM(BPF_MOV, R6, 6),
+			BPF_ALU32_IMM(BPF_MOV, R7, 7),
+			BPF_ALU32_IMM(BPF_MOV, R8, 8),
+			BPF_ALU32_IMM(BPF_MOV, R9, 9),
+			BPF_ALU64_IMM(BPF_DIV, R2, 123456789),
+			BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
+			BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
+			BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
+			BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
+			BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
+			BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
+			BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
+			BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
+			BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
+			BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		/*
+		 * Test 32-bit JITs that implement complex ALU64 operations as
+		 * function calls R0 = f(R1, R2), and must re-arrange operands.
+		 */
+#define NUMER 0xfedcba9876543210ULL
+#define DENOM 0x0123456789abcdefULL
+		"ALU64_DIV X: Operand register permutations",
+		.u.insns_int = {
+			/* R0 / R2 */
+			BPF_LD_IMM64(R0, NUMER),
+			BPF_LD_IMM64(R2, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R0, R2),
+			BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R1 / R0 */
+			BPF_LD_IMM64(R1, NUMER),
+			BPF_LD_IMM64(R0, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R1, R0),
+			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R0 / R1 */
+			BPF_LD_IMM64(R0, NUMER),
+			BPF_LD_IMM64(R1, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R0, R1),
+			BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R2 / R0 */
+			BPF_LD_IMM64(R2, NUMER),
+			BPF_LD_IMM64(R0, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R2, R0),
+			BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R2 / R1 */
+			BPF_LD_IMM64(R2, NUMER),
+			BPF_LD_IMM64(R1, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R2, R1),
+			BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R1 / R2 */
+			BPF_LD_IMM64(R1, NUMER),
+			BPF_LD_IMM64(R2, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R1, R2),
+			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			BPF_LD_IMM64(R0, 1),
+			/* R1 / R1 */
+			BPF_LD_IMM64(R1, NUMER),
+			BPF_ALU64_REG(BPF_DIV, R1, R1),
+			BPF_JMP_IMM(BPF_JEQ, R1, 1, 1),
+			BPF_EXIT_INSN(),
+			/* R2 / R2 */
+			BPF_LD_IMM64(R2, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R2, R2),
+			BPF_JMP_IMM(BPF_JEQ, R2, 1, 1),
+			BPF_EXIT_INSN(),
+			/* R3 / R4 */
+			BPF_LD_IMM64(R3, NUMER),
+			BPF_LD_IMM64(R4, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R3, R4),
+			BPF_JMP_IMM(BPF_JEQ, R3, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* Successful return */
+			BPF_LD_IMM64(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } },
+#undef NUMER
+#undef DENOM
+	},
 	{
 		"check: missing ret",
 		.u.insns = {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC PATCH 09/14] bpf/tests: add word-order tests for load/store of double words
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (7 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 08/14] bpf/tests: add tests for ALU operations implemented with function calls Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 10/14] bpf/tests: add branch conversion JIT test Johan Almbladh
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

A double word (64-bit) load/store may be implemented as two successive
32-bit operations, one for each word. Check that the order of those
operations is consistent with the machine endianness.
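
As an illustration, a 32-bit JIT typically splits the access into two
32-bit stores along these lines (a minimal sketch; the emit_store_w()
helper and struct jit_ctx are hypothetical, for illustration only):

	/* Store the 64-bit value hi:lo at dst + off, one word at a time */
	static void emit_store_dw(struct jit_ctx *ctx, u8 dst, s16 off,
				  u32 lo, u32 hi)
	{
	#ifdef __BIG_ENDIAN
		emit_store_w(ctx, dst, off, hi);	/* high word first */
		emit_store_w(ctx, dst, off + 4, lo);
	#else
		emit_store_w(ctx, dst, off, lo);	/* low word first */
		emit_store_w(ctx, dst, off + 4, hi);
	#endif
	}

The tests store a 64-bit pattern with BPF_DW and then read back each
32-bit word separately, so a JIT that emits the words in the wrong
order fails both tests.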

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 1115e39630ce..8b94902702ed 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -5417,6 +5417,42 @@ static struct bpf_test tests[] = {
 		{ { 0, 0xffffffff } },
 		.stack_depth = 40,
 	},
+	{
+		"STX_MEM_DW: Store double word: first word in memory",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0),
+			BPF_LD_IMM64(R1, 0x0123456789abcdefLL),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_LDX_MEM(BPF_W, R0, R10, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+#ifdef __BIG_ENDIAN
+		{ { 0, 0x01234567 } },
+#else
+		{ { 0, 0x89abcdef } },
+#endif
+		.stack_depth = 40,
+	},
+	{
+		"STX_MEM_DW: Store double word: second word in memory",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0),
+			BPF_LD_IMM64(R1, 0x0123456789abcdefLL),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_LDX_MEM(BPF_W, R0, R10, -36),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+#ifdef __BIG_ENDIAN
+		{ { 0, 0x89abcdef } },
+#else
+		{ { 0, 0x01234567 } },
+#endif
+		.stack_depth = 40,
+	},
 	/* BPF_STX | BPF_ATOMIC | BPF_W/DW */
 	{
 		"STX_XADD_W: Test: 0x12 + 0x10 = 0x22",
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC PATCH 10/14] bpf/tests: add branch conversion JIT test
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (8 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 09/14] bpf/tests: add word-order tests for load/store of double words Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 11/14] bpf/tests: add test for 32-bit context pointer argument passing Johan Almbladh
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Some JITs may need to convert a conditional jump instruction into a
short PC-relative branch and a long unconditional jump, if the
PC-relative offset exceeds the range of the offset field in the CPU
branch instruction. This test triggers such a conversion on the
32-bit MIPS JIT.
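
As an illustration, the conversion replaces an out-of-range conditional
branch with an inverted short branch over an absolute jump, conceptually
(MIPS-like pseudo-assembly, not taken from any particular JIT):

	/* beq $a0, $a1, far_target -- offset does not fit in 16 bits */
	bne	$a0, $a1, 1f	# inverted condition, short branch
	nop
	j	far_target	# absolute jump with a 26-bit range
	nop
	1: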

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 8b94902702ed..55914b6236aa 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -461,6 +461,36 @@ static int bpf_fill_stxdw(struct bpf_test *self)
 	return __bpf_fill_stxdw(self, BPF_DW);
 }
 
+static int bpf_fill_long_jmp(struct bpf_test *self)
+{
+	unsigned int len = BPF_MAXINSNS;
+	struct bpf_insn *insn;
+	int i;
+
+	insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
+	if (!insn)
+		return -ENOMEM;
+
+	insn[0] = BPF_ALU64_IMM(BPF_MOV, R0, 1);
+	insn[1] = BPF_JMP_IMM(BPF_JEQ, R0, 1, len - 2 - 1);
+
+	/*
+	 * Fill with a complex 64-bit operation that expands to a lot of
+	 * instructions on 32-bit JITs. The large jump offset can then
+	 * overflow the conditional branch field size, triggering a branch
+	 * conversion mechanism in some JITs.
+	 */
+	for (i = 2; i < len - 1; i++)
+		insn[i] = BPF_ALU64_IMM(BPF_MUL, R0, (i << 16) + i);
+
+	insn[len - 1] = BPF_EXIT_INSN();
+
+	self->u.ptr.insns = insn;
+	self->u.ptr.len = len;
+
+	return 0;
+}
+
 static struct bpf_test tests[] = {
 	{
 		"TAX",
@@ -6892,6 +6922,14 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{	/* Mainly checking JIT here. */
+		"BPF_MAXINSNS: Very long conditional jump",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_long_jmp,
+	},
 	{
 		"JMP_JA: Jump, gap, jump, ...",
 		{ },
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC PATCH 11/14] bpf/tests: add test for 32-bit context pointer argument passing
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (9 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 10/14] bpf/tests: add branch conversion JIT test Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 12/14] bpf/tests: add tests for atomic operations Johan Almbladh
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

On a 32-bit architecture, the context pointer should occupy the low
half of R1, and the other half should be zero.
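
As an illustration, a 32-bit JIT that maps each eBPF register to a pair
of CPU registers must initialize both halves of R1 in the prologue,
along these lines (made-up register names, for illustration only):

	/* eBPF R1 maps to the native register pair {r1_hi, r1_lo} */
	move	r1_lo, a0	# 32-bit context pointer from argument 0
	move	r1_hi, zero	# high half must be cleared explicitly

The test below verifies both properties: the low half of R1 is a
non-zero pointer and the high half is zero.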

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 55914b6236aa..314af6eaeb92 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -2084,6 +2084,22 @@ static struct bpf_test tests[] = {
 #undef NUMER
 #undef DENOM
 	},
+#ifdef CONFIG_32BIT
+	{
+		"INT: 32-bit context pointer word order and zero-extension",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_JMP32_IMM(BPF_JEQ, R1, 0, 3),
+			BPF_ALU64_IMM(BPF_RSH, R1, 32),
+			BPF_JMP32_IMM(BPF_JNE, R1, 0, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+#endif
 	{
 		"check: missing ret",
 		.u.insns = {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC PATCH 12/14] bpf/tests: add tests for atomic operations
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (10 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 11/14] bpf/tests: add test for 32-bit context pointer argument passing Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 13/14] bpf/tests: add tests for BPF_CMPXCHG Johan Almbladh
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Tests for each atomic arithmetic operation and BPF_XCHG, derived from
old BPF_XADD tests. The tests include BPF_W/DW and BPF_FETCH variants.
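
For reference, each test is generated from one of four template macros
defined in the diff below, covering the stored result, side effects on
r10 and r0, and the fetched value. As an example, the expansion of
BPF_ATOMIC_OP_TEST1(BPF_W, BPF_ADD, +, 0x12, 0xab, 0xbd) is equivalent
to:

	{
		"BPF_ATOMIC | BPF_W, BPF_ADD: Test: 0x12 + 0xab = 0xbd",
		.u.insns_int = {
			BPF_ALU32_IMM(BPF_MOV, R5, 0xab),
			BPF_ST_MEM(BPF_W, R10, -40, 0x12),
			BPF_ATOMIC_OP(BPF_W, BPF_ADD, R10, R5, -40),
			BPF_LDX_MEM(BPF_W, R0, R10, -40),
			BPF_EXIT_INSN(),
		},
		INTERNAL,
		{ },
		{ { 0, 0xbd } },
		.stack_depth = 40,
	},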

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 252 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 166 insertions(+), 86 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 314af6eaeb92..ac50cb023324 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -5500,49 +5500,6 @@ static struct bpf_test tests[] = {
 		.stack_depth = 40,
 	},
 	/* BPF_STX | BPF_ATOMIC | BPF_W/DW */
-	{
-		"STX_XADD_W: Test: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_W, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_W, BPF_ADD, R10, R0, -40),
-			BPF_LDX_MEM(BPF_W, R0, R10, -40),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0x22 } },
-		.stack_depth = 40,
-	},
-	{
-		"STX_XADD_W: Test side-effects, r10: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU64_REG(BPF_MOV, R1, R10),
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_W, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_W, BPF_ADD, R10, R0, -40),
-			BPF_ALU64_REG(BPF_MOV, R0, R10),
-			BPF_ALU64_REG(BPF_SUB, R0, R1),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0 } },
-		.stack_depth = 40,
-	},
-	{
-		"STX_XADD_W: Test side-effects, r0: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_W, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_W, BPF_ADD, R10, R0, -40),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0x12 } },
-		.stack_depth = 40,
-	},
 	{
 		"STX_XADD_W: X + 1 + 1 + 1 + ...",
 		{ },
@@ -5551,49 +5508,6 @@ static struct bpf_test tests[] = {
 		{ { 0, 4134 } },
 		.fill_helper = bpf_fill_stxw,
 	},
-	{
-		"STX_XADD_DW: Test: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_DW, BPF_ADD, R10, R0, -40),
-			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0x22 } },
-		.stack_depth = 40,
-	},
-	{
-		"STX_XADD_DW: Test side-effects, r10: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU64_REG(BPF_MOV, R1, R10),
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_DW, BPF_ADD, R10, R0, -40),
-			BPF_ALU64_REG(BPF_MOV, R0, R10),
-			BPF_ALU64_REG(BPF_SUB, R0, R1),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0 } },
-		.stack_depth = 40,
-	},
-	{
-		"STX_XADD_DW: Test side-effects, r0: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_DW, BPF_ADD, R10, R0, -40),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0x12 } },
-		.stack_depth = 40,
-	},
 	{
 		"STX_XADD_DW: X + 1 + 1 + 1 + ...",
 		{ },
@@ -5602,6 +5516,172 @@ static struct bpf_test tests[] = {
 		{ { 0, 4134 } },
 		.fill_helper = bpf_fill_stxdw,
 	},
+	/*
+	 * Exhaustive tests of atomic operation variants.
+	 * Individual tests are expanded from template macros for all
+	 * combinations of ALU operation, word size and fetching.
+	 */
+#define BPF_ATOMIC_OP_TEST1(width, op, logic, old, update, result)	\
+{									\
+	"BPF_ATOMIC | " #width ", " #op ": Test: "			\
+		#old " " #logic " " #update " = " #result,		\
+	.u.insns_int = {						\
+		BPF_ALU32_IMM(BPF_MOV, R5, update),			\
+		BPF_ST_MEM(width, R10, -40, old),			\
+		BPF_ATOMIC_OP(width, op, R10, R5, -40),			\
+		BPF_LDX_MEM(width, R0, R10, -40),			\
+		BPF_EXIT_INSN(),					\
+	},								\
+	INTERNAL,							\
+	{ },								\
+	{ { 0, result } },						\
+	.stack_depth = 40,						\
+}
+#define BPF_ATOMIC_OP_TEST2(width, op, logic, old, update, result)	\
+{									\
+	"BPF_ATOMIC | " #width ", " #op ": Test side effects, r10: "	\
+		#old " " #logic " " #update " = " #result,		\
+	.u.insns_int = {						\
+		BPF_ALU64_REG(BPF_MOV, R1, R10),			\
+		BPF_ALU32_IMM(BPF_MOV, R0, update),			\
+		BPF_ST_MEM(width, R10, -40, old),			\
+		BPF_ATOMIC_OP(width, op, R10, R0, -40),			\
+		BPF_ALU64_REG(BPF_MOV, R0, R10),			\
+		BPF_ALU64_REG(BPF_SUB, R0, R1),				\
+		BPF_EXIT_INSN(),					\
+	},								\
+	INTERNAL,							\
+	{ },								\
+	{ { 0, 0 } },							\
+	.stack_depth = 40,						\
+}
+#define BPF_ATOMIC_OP_TEST3(width, op, logic, old, update, result)	\
+{									\
+	"BPF_ATOMIC | " #width ", " #op ": Test side effects, r0: "	\
+		#old " " #logic " " #update " = " #result,		\
+	.u.insns_int = {						\
+		BPF_ALU64_REG(BPF_MOV, R0, R10),			\
+		BPF_ALU32_IMM(BPF_MOV, R1, update),			\
+		BPF_ST_MEM(width, R10, -40, old),			\
+		BPF_ATOMIC_OP(width, op, R10, R1, -40),			\
+		BPF_ALU64_REG(BPF_SUB, R0, R10),			\
+		BPF_EXIT_INSN(),					\
+	},								\
+	INTERNAL,                                                       \
+	{ },                                                            \
+	{ { 0, 0 } },                                                   \
+	.stack_depth = 40,                                              \
+}
+#define BPF_ATOMIC_OP_TEST4(width, op, logic, old, update, result)	\
+{									\
+	"BPF_ATOMIC | " #width ", " #op ": Test fetch: "		\
+		#old " " #logic " " #update " = " #result,		\
+	.u.insns_int = {						\
+		BPF_ALU32_IMM(BPF_MOV, R3, update),			\
+		BPF_ST_MEM(width, R10, -40, old),			\
+		BPF_ATOMIC_OP(width, op, R10, R3, -40),			\
+		BPF_ALU64_REG(BPF_MOV, R0, R3),                         \
+		BPF_EXIT_INSN(),					\
+	},								\
+	INTERNAL,                                                       \
+	{ },                                                            \
+	{ { 0, (op) & BPF_FETCH ? old : update } },			\
+	.stack_depth = 40,                                              \
+}
+	/* BPF_ATOMIC | BPF_W: BPF_ADD */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	/* BPF_ATOMIC | BPF_W: BPF_ADD | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	/* BPF_ATOMIC | BPF_DW: BPF_ADD */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	/* BPF_ATOMIC | BPF_DW: BPF_ADD | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	/* BPF_ATOMIC | BPF_W: BPF_AND */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_AND, &, 0x12, 0xab, 0x02),
+	/* BPF_ATOMIC | BPF_W: BPF_AND | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	/* BPF_ATOMIC | BPF_DW: BPF_AND */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_AND, &, 0x12, 0xab, 0x02),
+	/* BPF_ATOMIC | BPF_DW: BPF_AND | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	/* BPF_ATOMIC | BPF_W: BPF_OR */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_OR, |, 0x12, 0xab, 0xbb),
+	/* BPF_ATOMIC | BPF_W: BPF_OR | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	/* BPF_ATOMIC | BPF_DW: BPF_OR */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_OR, |, 0x12, 0xab, 0xbb),
+	/* BPF_ATOMIC | BPF_DW: BPF_OR | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	/* BPF_ATOMIC | BPF_W: BPF_XOR */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	/* BPF_ATOMIC | BPF_W: BPF_XOR | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	/* BPF_ATOMIC | BPF_DW: BPF_XOR */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	/* BPF_ATOMIC | BPF_DW: BPF_XOR | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	/* BPF_ATOMIC | BPF_W: BPF_XCHG */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	/* BPF_ATOMIC | BPF_DW: BPF_XCHG */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+#undef BPF_ATOMIC_OP_TEST1
+#undef BPF_ATOMIC_OP_TEST2
+#undef BPF_ATOMIC_OP_TEST3
+#undef BPF_ATOMIC_OP_TEST4
 	/* BPF_JMP32 | BPF_JEQ | BPF_K */
 	{
 		"JMP32_JEQ_K: Small immediate",
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC PATCH 13/14] bpf/tests: add tests for BPF_CMPXCHG
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (11 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 12/14] bpf/tests: add tests for atomic operations Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26  8:17 ` [RFC PATCH 14/14] bpf/tests: add tail call test suite Johan Almbladh
  2021-07-26 22:53 ` [RFC PATCH 00/14] bpf/tests: Extend the eBPF " Andrii Nakryiko
  14 siblings, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Tests for BPF_CMPXCHG with both word and double word operands. As with
the tests for other atomic operations, these tests only check the result
of the arithmetic operation. The atomicity of the operations is not tested.
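
As a reminder of the semantics under test, the word-sized
BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40) behaves like the
following, executed atomically:

	u32 old = *(u32 *)(R10 - 40);

	if (old == (u32)R0)
		*(u32 *)(R10 - 40) = (u32)R3;
	R0 = old;	/* the old value is loaded into R0 either way */

The "return" tests check the value left in R0 and the "store" tests
check the value left in memory, for both the success and failure cases.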

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 166 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 166 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ac50cb023324..af5758151d0a 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -5682,6 +5682,172 @@ static struct bpf_test tests[] = {
 #undef BPF_ATOMIC_OP_TEST2
 #undef BPF_ATOMIC_OP_TEST3
 #undef BPF_ATOMIC_OP_TEST4
+	/* BPF_ATOMIC | BPF_W, BPF_CMPXCHG */
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test successful return",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01234567 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test successful store",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_LDX_MEM(BPF_W, R0, R10, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test failure return",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x76543210),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01234567 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test failure store",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x76543210),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_LDX_MEM(BPF_W, R0, R10, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01234567 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test side effects",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_ALU32_REG(BPF_MOV, R0, R3),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } },
+		.stack_depth = 40,
+	},
+	/* BPF_ATOMIC | BPF_DW, BPF_CMPXCHG */
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test successful return",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test successful store",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_STX_MEM(BPF_DW, R10, R0, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
+			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R2),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test failure return",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_ALU64_IMM(BPF_ADD, R0, 1),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test failure store",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_ALU64_IMM(BPF_ADD, R0, 1),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
+			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test side effects",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_LD_IMM64(R0, 0xfecdba9876543210ULL),
+			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R2),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
 	/* BPF_JMP32 | BPF_JEQ | BPF_K */
 	{
 		"JMP32_JEQ_K: Small immediate",
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC PATCH 14/14] bpf/tests: add tail call test suite
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (12 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 13/14] bpf/tests: add tests for BPF_CMPXCHG Johan Almbladh
@ 2021-07-26  8:17 ` Johan Almbladh
  2021-07-26 11:07   ` kernel test robot
  2021-07-26 21:33   ` kernel test robot
  2021-07-26 22:53 ` [RFC PATCH 00/14] bpf/tests: Extend the eBPF " Andrii Nakryiko
  14 siblings, 2 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-26  8:17 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

While BPF_CALL instructions were tested implicitly by the cBPF-to-eBPF
translation, there have been no tests for BPF_TAIL_CALL instructions.
The new test suite includes tests for tail call chaining, tail call count
tracking and error paths. It is mainly intended for JIT development and
testing.
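
For context, each TAIL_CALL(offset) snippet used in the tests expands
to the eBPF equivalent of the helper call below, where the program
array and the index are patched in by the test runner:

	/* R1 = ctx, R2 = program array, R3 = index */
	bpf_tail_call(ctx, &progs->map, index);
	/* execution continues here only if the tail call failed */

A successful tail call never returns to the caller; on error (index out
of range, NULL program, or tail call count exceeded) execution falls
through to the next instruction.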

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 249 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 249 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index af5758151d0a..05ba00049052 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -8981,8 +8981,249 @@ static __init int test_bpf(void)
 	return err_cnt ? -EINVAL : 0;
 }
 
+struct tail_call_test {
+	const char *descr;
+	struct bpf_insn insns[MAX_INSNS];
+	int result;
+	int stack_depth;
+};
+
+/*
+ * Magic marker used in test snippets for tail calls below.
+ * BPF_LD/MOV to R2 and R3 with this immediate value are replaced
+ * with the proper values by the test runner.
+ */
+#define TAIL_CALL_MARKER 0x7a11ca11
+
+/* Special offset to indicate a NULL call target */
+#define TAIL_CALL_NULL 0x7fff
+
+#define TAIL_CALL(offset)			       \
+	BPF_LD_IMM64(R2, TAIL_CALL_MARKER),	       \
+	BPF_RAW_INSN(BPF_ALU | BPF_MOV | BPF_K, R3, 0, \
+		     offset, TAIL_CALL_MARKER),	       \
+	BPF_JMP_IMM(BPF_TAIL_CALL, 0, 0, 0)
+
+/*
+ * Tail call tests. Each test case may call any other test in the table,
+ * including itself, specified as a relative index offset from the calling
+ * test. The index TAIL_CALL_NULL can be used to specify a NULL target
+ * function to test the JIT error path.
+ */
+static struct tail_call_test tail_call_tests[] = {
+	{
+		"Tail call leaf",
+		.insns = {
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_ALU64_IMM(BPF_ADD, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 1,
+	},
+	{
+		"Tail call 2",
+		.insns = {
+			BPF_ALU64_IMM(BPF_ADD, R1, 2),
+			TAIL_CALL(-1),
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 3,
+	},
+	{
+		"Tail call 3",
+		.insns = {
+			BPF_ALU64_IMM(BPF_ADD, R1, 3),
+			TAIL_CALL(-1),
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 6,
+	},
+	{
+		"Tail call 4",
+		.insns = {
+			BPF_ALU64_IMM(BPF_ADD, R1, 4),
+			TAIL_CALL(-1),
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 10,
+	},
+	{
+		"Tail call error path, max count reached",
+		.insns = {
+			BPF_ALU64_IMM(BPF_ADD, R1, 1),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			TAIL_CALL(0),
+			BPF_EXIT_INSN(),
+		},
+		.result = MAX_TAIL_CALL_CNT + 2,
+	},
+	{
+		"Tail call error path, NULL target",
+		.insns = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			TAIL_CALL(TAIL_CALL_NULL),
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 1,
+	},
+	{
+		/* Must be the last test */
+		"Tail call error path, index out of range",
+		.insns = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			TAIL_CALL(1),    /* Index out of range */
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 1,
+	},
+};
+
+static void __init destroy_tail_call_tests(struct bpf_array *progs)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(tail_call_tests); i++)
+		if (progs->ptrs[i])
+			bpf_prog_free(progs->ptrs[i]);
+	kfree(progs);
+}
+
+static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
+{
+	struct bpf_array *progs;
+	int ntests = ARRAY_SIZE(tail_call_tests);
+	int which, err;
+
+	/* Allocate the table of programs to be used for tail calls */
+	progs = kzalloc(sizeof(*progs) + (ntests + 1) * sizeof(progs->ptrs[0]),
+			GFP_KERNEL);
+	if (!progs)
+		goto out_nomem;
+
+	/* Create all eBPF programs and populate the table */
+	for (which = 0; which < ntests; which++) {
+		struct tail_call_test *test = &tail_call_tests[which];
+		struct bpf_prog *fp;
+		int err, len, i;
+
+		/* Compute the number of program instructions */
+		for (len = 0; len < MAX_INSNS; len++) {
+			struct bpf_insn *insn = &test->insns[len];
+
+			if (len < MAX_INSNS - 1 &&
+			    insn->code == (BPF_LD | BPF_DW | BPF_IMM))
+				len++;
+			if (insn->code == 0)
+				break;
+		}
+
+		/* Allocate and initialize the program */
+		fp = bpf_prog_alloc(bpf_prog_size(len), 0);
+		if (!fp)
+			goto out_nomem;
+
+		fp->len = len;
+		fp->type = BPF_PROG_TYPE_SOCKET_FILTER;
+		fp->aux->stack_depth = test->stack_depth;
+		memcpy(fp->insnsi, test->insns, len * sizeof(struct bpf_insn));
+
+		/* Relocate runtime tail call offsets and addresses */
+		for (i = 0; i < len; i++) {
+			struct bpf_insn *insn = &fp->insnsi[i];
+			int target;
+
+			if (insn->imm != TAIL_CALL_MARKER)
+				continue;
+
+			switch (insn->code) {
+			case BPF_LD | BPF_DW | BPF_IMM:
+				if (insn->dst_reg == R2) {
+					insn[0].imm = (u32)progs;
+					insn[1].imm = ((u64)(long)progs) >> 32;
+				}
+				break;
+
+			case BPF_ALU | BPF_MOV | BPF_K:
+			case BPF_ALU64 | BPF_MOV | BPF_K:
+				if (insn->off == TAIL_CALL_NULL)
+					target = ntests;
+				else
+					target = which + insn->off;
+				if (insn->dst_reg == R3)
+					insn->imm = target;
+				break;
+			}
+		}
+
+		fp = bpf_prog_select_runtime(fp, &err);
+		if (err)
+			goto out_err;
+
+		progs->ptrs[which] = fp;
+	}
+
+	/* The last entry contains a NULL program pointer */
+	progs->map.max_entries = ntests + 1;
+	*pprogs = progs;
+	return 0;
+
+out_nomem:
+	err = -ENOMEM;
+
+out_err:
+	if (progs)
+		destroy_tail_call_tests(progs);
+	return err;
+}
+
+static __init int test_tail_calls(struct bpf_array *progs)
+{
+	int i, err_cnt = 0, pass_cnt = 0;
+	int jit_cnt = 0, run_cnt = 0;
+
+	for (i = 0; i < ARRAY_SIZE(tail_call_tests); i++) {
+		struct tail_call_test *test = &tail_call_tests[i];
+		struct bpf_prog *fp = progs->ptrs[i];
+		u64 duration;
+		int ret;
+
+		cond_resched();
+
+		pr_info("#%d %s ", i, test->descr);
+		if (!fp) {
+			err_cnt++;
+			continue;
+		}
+		pr_cont("jited:%u ", fp->jited);
+
+		run_cnt++;
+		if (fp->jited)
+			jit_cnt++;
+
+		ret = __run_one(fp, NULL, MAX_TESTRUNS, &duration);
+		if (ret == test->result) {
+			pr_cont("%lld PASS", duration);
+			pass_cnt++;
+		} else {
+			pr_cont("ret %d != %d FAIL", ret, test->result);
+			err_cnt++;
+		}
+	}
+
+	pr_info("%s: Summary: %d PASSED, %d FAILED, [%d/%d JIT'ed]\n",
+		__func__, pass_cnt, err_cnt, jit_cnt, run_cnt);
+
+	return err_cnt ? -EINVAL : 0;
+}
+
 static int __init test_bpf_init(void)
 {
+	struct bpf_array *progs = NULL;
 	int ret;
 
 	ret = prepare_bpf_tests();
@@ -8994,6 +9235,14 @@ static int __init test_bpf_init(void)
 	if (ret)
 		return ret;
 
+	ret = prepare_tail_call_tests(&progs);
+	if (ret)
+		return ret;
+	ret = test_tail_calls(progs);
+	destroy_tail_call_tests(progs);
+	if (ret)
+		return ret;
+
 	return test_skb_segment();
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [RFC PATCH 14/14] bpf/tests: add tail call test suite
  2021-07-26  8:17 ` [RFC PATCH 14/14] bpf/tests: add tail call test suite Johan Almbladh
@ 2021-07-26 11:07   ` kernel test robot
  2021-07-26 21:33   ` kernel test robot
  1 sibling, 0 replies; 29+ messages in thread
From: kernel test robot @ 2021-07-26 11:07 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 4673 bytes --]

Hi Johan,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on bpf-next/master]
[also build test WARNING on bpf/master net/master ipvs/master net-next/master v5.14-rc3 next-20210723]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Johan-Almbladh/bpf-tests-Extend-the-eBPF-test-suite/20210726-162045
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: s390-allyesconfig (attached as .config)
compiler: s390-linux-gcc (GCC) 10.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/d33abdcea7dcc6b04b87076326e76adf0655f617
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Johan-Almbladh/bpf-tests-Extend-the-eBPF-test-suite/20210726-162045
        git checkout d33abdcea7dcc6b04b87076326e76adf0655f617
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-10.3.0 make.cross ARCH=s390 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   lib/test_bpf.c: In function 'prepare_tail_call_tests':
>> lib/test_bpf.c:9146:20: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    9146 |      insn[0].imm = (u32)progs;
         |                    ^


vim +9146 lib/test_bpf.c

  9095	
  9096	static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
  9097	{
  9098		struct bpf_array *progs;
  9099		int ntests = ARRAY_SIZE(tail_call_tests);
  9100		int which, err;
  9101	
  9102		/* Allocate the table of programs to be used for tall calls */
  9103		progs = kzalloc(sizeof(*progs) + (ntests + 1) * sizeof(progs->ptrs[0]),
  9104				GFP_KERNEL);
  9105		if (!progs)
  9106			goto out_nomem;
  9107	
  9108		/* Create all eBPF programs and populate the table */
  9109		for (which = 0; which < ntests; which++) {
  9110			struct tail_call_test *test = &tail_call_tests[which];
  9111			struct bpf_prog *fp;
  9112			int err, len, i;
  9113	
  9114			/* Compute the number of program instructions */
  9115			for (len = 0; len < MAX_INSNS; len++) {
  9116				struct bpf_insn *insn = &test->insns[len];
  9117	
  9118				if (len < MAX_INSNS - 1 &&
  9119				    insn->code == (BPF_LD | BPF_DW | BPF_IMM))
  9120					len++;
  9121				if (insn->code == 0)
  9122					break;
  9123			}
  9124	
  9125			/* Allocate and initialize the program */
  9126			fp = bpf_prog_alloc(bpf_prog_size(len), 0);
  9127			if (!fp)
  9128				goto out_nomem;
  9129	
  9130			fp->len = len;
  9131			fp->type = BPF_PROG_TYPE_SOCKET_FILTER;
  9132			fp->aux->stack_depth = test->stack_depth;
  9133			memcpy(fp->insnsi, test->insns, len * sizeof(struct bpf_insn));
  9134	
  9135			/* Relocate runtime tail call offsets and addresses */
  9136			for (i = 0; i < len; i++) {
  9137				struct bpf_insn *insn = &fp->insnsi[i];
  9138				int target;
  9139	
  9140				if (insn->imm != TAIL_CALL_MARKER)
  9141					continue;
  9142	
  9143				switch (insn->code) {
  9144				case BPF_LD | BPF_DW | BPF_IMM:
  9145					if (insn->dst_reg == R2) {
> 9146						insn[0].imm = (u32)progs;
  9147						insn[1].imm = ((u64)(long)progs) >> 32;
  9148					}
  9149					break;
  9150	
  9151				case BPF_ALU | BPF_MOV | BPF_K:
  9152				case BPF_ALU64 | BPF_MOV | BPF_K:
  9153					if (insn->off == TAIL_CALL_NULL)
  9154						target = ntests;
  9155					else
  9156						target = which + insn->off;
  9157					if (insn->dst_reg == R3)
  9158						insn->imm = target;
  9159					break;
  9160				}
  9161			}
  9162	
  9163			fp = bpf_prog_select_runtime(fp, &err);
  9164			if (err)
  9165				goto out_err;
  9166	
  9167			progs->ptrs[which] = fp;
  9168		}
  9169	
  9170		/* The last entry contains a NULL program pointer */
  9171		progs->map.max_entries = ntests + 1;
  9172		*pprogs = progs;
  9173		return 0;
  9174	
  9175	out_nomem:
  9176		err = -ENOMEM;
  9177	
  9178	out_err:
  9179		if (progs)
  9180			destroy_tail_call_tests(progs);
  9181		return err;
  9182	}
  9183	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 66778 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC PATCH 14/14] bpf/tests: add tail call test suite
  2021-07-26  8:17 ` [RFC PATCH 14/14] bpf/tests: add tail call test suite Johan Almbladh
  2021-07-26 11:07   ` kernel test robot
@ 2021-07-26 21:33   ` kernel test robot
  1 sibling, 0 replies; 29+ messages in thread
From: kernel test robot @ 2021-07-26 21:33 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 11732 bytes --]

Hi Johan,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on bpf-next/master]
[also build test WARNING on bpf/master net/master ipvs/master net-next/master v5.14-rc3 next-20210723]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Johan-Almbladh/bpf-tests-Extend-the-eBPF-test-suite/20210726-162045
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: powerpc-randconfig-r014-20210726 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project c63dbd850182797bc4b76124d08e1c320ab2365d)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install powerpc cross compiling tool for clang build
        # apt-get install binutils-powerpc-linux-gnu
        # https://github.com/0day-ci/linux/commit/d33abdcea7dcc6b04b87076326e76adf0655f617
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Johan-Almbladh/bpf-tests-Extend-the-eBPF-test-suite/20210726-162045
        git checkout d33abdcea7dcc6b04b87076326e76adf0655f617
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:114:1: note: expanded from here
   __do_insb
   ^
   arch/powerpc/include/asm/io.h:556:56: note: expanded from macro '__do_insb'
   #define __do_insb(p, b, n)      readsb((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
                                          ~~~~~~~~~~~~~~~~~~~~~^
   In file included from lib/test_bpf.c:12:
   In file included from include/linux/filter.h:13:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:45:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:116:1: note: expanded from here
   __do_insw
   ^
   arch/powerpc/include/asm/io.h:557:56: note: expanded from macro '__do_insw'
   #define __do_insw(p, b, n)      readsw((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
                                          ~~~~~~~~~~~~~~~~~~~~~^
   In file included from lib/test_bpf.c:12:
   In file included from include/linux/filter.h:13:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:47:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:118:1: note: expanded from here
   __do_insl
   ^
   arch/powerpc/include/asm/io.h:558:56: note: expanded from macro '__do_insl'
   #define __do_insl(p, b, n)      readsl((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
                                          ~~~~~~~~~~~~~~~~~~~~~^
   In file included from lib/test_bpf.c:12:
   In file included from include/linux/filter.h:13:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:49:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:120:1: note: expanded from here
   __do_outsb
   ^
   arch/powerpc/include/asm/io.h:559:58: note: expanded from macro '__do_outsb'
   #define __do_outsb(p, b, n)     writesb((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
                                           ~~~~~~~~~~~~~~~~~~~~~^
   In file included from lib/test_bpf.c:12:
   In file included from include/linux/filter.h:13:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:51:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(outsw, (unsigned long p, const void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:122:1: note: expanded from here
   __do_outsw
   ^
   arch/powerpc/include/asm/io.h:560:58: note: expanded from macro '__do_outsw'
   #define __do_outsw(p, b, n)     writesw((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
                                           ~~~~~~~~~~~~~~~~~~~~~^
   In file included from lib/test_bpf.c:12:
   In file included from include/linux/filter.h:13:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:53:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(outsl, (unsigned long p, const void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:124:1: note: expanded from here
   __do_outsl
   ^
   arch/powerpc/include/asm/io.h:561:58: note: expanded from macro '__do_outsl'
   #define __do_outsl(p, b, n)     writesl((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
                                           ~~~~~~~~~~~~~~~~~~~~~^
>> lib/test_bpf.c:9146:20: warning: cast to smaller integer type 'u32' (aka 'unsigned int') from 'struct bpf_array *' [-Wpointer-to-int-cast]
                                           insn[0].imm = (u32)progs;
                                                         ^~~~~~~~~~
>> lib/test_bpf.c:9164:7: warning: variable 'err' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
                   if (err)
                       ^~~
   lib/test_bpf.c:9181:9: note: uninitialized use occurs here
           return err;
                  ^~~
   lib/test_bpf.c:9164:3: note: remove the 'if' if its condition is always false
                   if (err)
                   ^~~~~~~~
   lib/test_bpf.c:9100:16: note: initialize the variable 'err' to silence this warning
           int which, err;
                         ^
                          = 0
   14 warnings generated.


vim +9146 lib/test_bpf.c

  9095	
  9096	static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
  9097	{
  9098		struct bpf_array *progs;
  9099		int ntests = ARRAY_SIZE(tail_call_tests);
  9100		int which, err;
  9101	
  9102		/* Allocate the table of programs to be used for tall calls */
  9103		progs = kzalloc(sizeof(*progs) + (ntests + 1) * sizeof(progs->ptrs[0]),
  9104				GFP_KERNEL);
  9105		if (!progs)
  9106			goto out_nomem;
  9107	
  9108		/* Create all eBPF programs and populate the table */
  9109		for (which = 0; which < ntests; which++) {
  9110			struct tail_call_test *test = &tail_call_tests[which];
  9111			struct bpf_prog *fp;
  9112			int err, len, i;
  9113	
  9114			/* Compute the number of program instructions */
  9115			for (len = 0; len < MAX_INSNS; len++) {
  9116				struct bpf_insn *insn = &test->insns[len];
  9117	
  9118				if (len < MAX_INSNS - 1 &&
  9119				    insn->code == (BPF_LD | BPF_DW | BPF_IMM))
  9120					len++;
  9121				if (insn->code == 0)
  9122					break;
  9123			}
  9124	
  9125			/* Allocate and initialize the program */
  9126			fp = bpf_prog_alloc(bpf_prog_size(len), 0);
  9127			if (!fp)
  9128				goto out_nomem;
  9129	
  9130			fp->len = len;
  9131			fp->type = BPF_PROG_TYPE_SOCKET_FILTER;
  9132			fp->aux->stack_depth = test->stack_depth;
  9133			memcpy(fp->insnsi, test->insns, len * sizeof(struct bpf_insn));
  9134	
  9135			/* Relocate runtime tail call offsets and addresses */
  9136			for (i = 0; i < len; i++) {
  9137				struct bpf_insn *insn = &fp->insnsi[i];
  9138				int target;
  9139	
  9140				if (insn->imm != TAIL_CALL_MARKER)
  9141					continue;
  9142	
  9143				switch (insn->code) {
  9144				case BPF_LD | BPF_DW | BPF_IMM:
  9145					if (insn->dst_reg == R2) {
> 9146						insn[0].imm = (u32)progs;
  9147						insn[1].imm = ((u64)(long)progs) >> 32;
  9148					}
  9149					break;
  9150	
  9151				case BPF_ALU | BPF_MOV | BPF_K:
  9152				case BPF_ALU64 | BPF_MOV | BPF_K:
  9153					if (insn->off == TAIL_CALL_NULL)
  9154						target = ntests;
  9155					else
  9156						target = which + insn->off;
  9157					if (insn->dst_reg == R3)
  9158						insn->imm = target;
  9159					break;
  9160				}
  9161			}
  9162	
  9163			fp = bpf_prog_select_runtime(fp, &err);
> 9164			if (err)
  9165				goto out_err;
  9166	
  9167			progs->ptrs[which] = fp;
  9168		}
  9169	
  9170		/* The last entry contains a NULL program pointer */
  9171		progs->map.max_entries = ntests + 1;
  9172		*pprogs = progs;
  9173		return 0;
  9174	
  9175	out_nomem:
  9176		err = -ENOMEM;
  9177	
  9178	out_err:
  9179		if (progs)
  9180			destroy_tail_call_tests(progs);
  9181		return err;
  9182	}
  9183	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 31939 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite
  2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (13 preceding siblings ...)
  2021-07-26  8:17 ` [RFC PATCH 14/14] bpf/tests: add tail call test suite Johan Almbladh
@ 2021-07-26 22:53 ` Andrii Nakryiko
  2021-07-28  8:27   ` Daniel Borkmann
  14 siblings, 1 reply; 29+ messages in thread
From: Andrii Nakryiko @ 2021-07-26 22:53 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin Lau,
	Song Liu, Yonghong Song, john fastabend, KP Singh, Tony Ambardar,
	Networking, bpf

On Mon, Jul 26, 2021 at 1:18 AM Johan Almbladh
<johan.almbladh@anyfinetworks.com> wrote:
>
> Greetings,
>
> During my work with the 32-bit MIPS JIT implementation I also added a
> number of new test cases in the test_bpf kernel module. I found it
> valuable to be able to throughly test the JIT on a low level with
> minimum dependency on user space tooling. If you think it would be useful,
> I have prepared a patch set with my additions. I have verified it on
> x86_64 and i386, with/without JIT and JIT hardening. The interpreter
> passes all tests. The JITs do too, with one exception, see NOTE below.
> The result for the x86_64 JIT is summarized below.
>
>     test_bpf: Summary: 577 PASSED, 0 FAILED, [565/565 JIT'ed]
>     test_bpf: test_tail_calls: Summary: 6 PASSED, 1 FAILED, [7/7 JIT'ed]
>
> I have inserted the new tests in the location where related tests are run,
> rather than putting them at the end. I have also tried to use the same
> description style as the surrounding tests. Below is a summary of the
> new tests.
>
> * Operations not previously covered
>   JMP32, ALU32 ARSH, remaining ATOMIC operations including
>   XCHG and CMPXCHG.
>
> * ALU operations with edge cases
>   32-bit JITs implement ALU64 operations with two 32-bit registers per
>   operand. Even "trivial" operations like bit shifts are non-trivial to
>   implement. Test different input values that may trigger different JIT
>   code paths. JITs may also implement BPF_K operations differently
>   depending on if the immediate fits the corresponding field width of the
>   native CPU instruction or not, so test that too.
>
> * Word order in load/store
>   The word order should follow endianness. Test that DW load/store
>   operations result in the expected word order in memory.
>
> * 32-bit eBPF argument zero extension
>   On a 32-bit JIT the eBPF argument is a 32-bit pointer. If passed in
>   a CPU register only one register in the mapped pair contains valid
>   data. Verify that value is properly zero-extended.
>
> * Long conditional jumps
>   Test to trigger the relative-to-absolute branch conversion in MIPS JITs,
>   when the PC-relative offset overflows the field width of the MIPS branch
>   instruction.
>
> * Tail calls
>   A new test suite to test tail calls. Also test error paths and TCC
>   limit.
>
> NOTE: There is a minor discrepancy between the interpreter and the
> (x86) JITs. With MAX_TAIL_CALL_CNT = 32, the interpreter seems to allow
> up to 33 tail calls, whereas the JITs stop at 32. This causes the max TCC

Given the intended case was to allow 32, let's fix up the interpreter
to be in line with JITs?

> test to fail for the JITs, since I used the interpreter as reference.
> Either we change the interpreter behavior, change the JITs, or relax the
> test to allow both behaviors.
>
> Let me know what you think.
>
> Cheers,
> Johan
>
> Johan Almbladh (14):
>   bpf/tests: add BPF_JMP32 test cases
>   bpf/tests: add BPF_MOV tests for zero and sign extension
>   bpf/tests: fix typos in test case descriptions
>   bpf/tests: add more tests of ALU32 and ALU64 bitwise operations
>   bpf/tests: add more ALU32 tests for BPF_LSH/RSH/ARSH
>   bpf/tests: add more BPF_LSH/RSH/ARSH tests for ALU64
>   bpf/tests: add more ALU64 BPF_MUL tests
>   bpf/tests: add tests for ALU operations implemented with function
>     calls
>   bpf/tests: add word-order tests for load/store of double words
>   bpf/tests: add branch conversion JIT test
>   bpf/tests: add test for 32-bit context pointer argument passing
>   bpf/tests: add tests for atomic operations
>   bpf/tests: add tests for BPF_CMPXCHG
>   bpf/tests: add tail call test suite
>
>  lib/test_bpf.c | 2732 +++++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 2475 insertions(+), 257 deletions(-)
>
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite
  2021-07-26 22:53 ` [RFC PATCH 00/14] bpf/tests: Extend the eBPF " Andrii Nakryiko
@ 2021-07-28  8:27   ` Daniel Borkmann
  2021-07-28 12:15     ` Johan Almbladh
  2021-07-28 16:47     ` [PATCH] bpf: Fix off-by-one in tail call count limiting Johan Almbladh
  0 siblings, 2 replies; 29+ messages in thread
From: Daniel Borkmann @ 2021-07-28  8:27 UTC (permalink / raw)
  To: Andrii Nakryiko, Johan Almbladh
  Cc: Alexei Starovoitov, Andrii Nakryiko, Martin Lau, Song Liu,
	Yonghong Song, john fastabend, KP Singh, Tony Ambardar,
	Networking, bpf

On 7/27/21 12:53 AM, Andrii Nakryiko wrote:
> On Mon, Jul 26, 2021 at 1:18 AM Johan Almbladh
> <johan.almbladh@anyfinetworks.com> wrote:
>>
>> Greetings,
>>
>> During my work with the 32-bit MIPS JIT implementation I also added a
>> number of new test cases in the test_bpf kernel module. I found it
>> valuable to be able to throughly test the JIT on a low level with
>> minimum dependency on user space tooling. If you think it would be useful,
>> I have prepared a patch set with my additions. I have verified it on
>> x86_64 and i386, with/without JIT and JIT hardening. The interpreter
>> passes all tests. The JITs do too, with one exception, see NOTE below.
>> The result for the x86_64 JIT is summarized below.
>>
>>      test_bpf: Summary: 577 PASSED, 0 FAILED, [565/565 JIT'ed]
>>      test_bpf: test_tail_calls: Summary: 6 PASSED, 1 FAILED, [7/7 JIT'ed]
>>
>> I have inserted the new tests in the location where related tests are run,
>> rather than putting them at the end. I have also tried to use the same
>> description style as the surrounding tests. Below is a summary of the
>> new tests.
>>
>> * Operations not previously covered
>>    JMP32, ALU32 ARSH, remaining ATOMIC operations including
>>    XCHG and CMPXCHG.
>>
>> * ALU operations with edge cases
>>    32-bit JITs implement ALU64 operations with two 32-bit registers per
>>    operand. Even "trivial" operations like bit shifts are non-trivial to
>>    implement. Test different input values that may trigger different JIT
>>    code paths. JITs may also implement BPF_K operations differently
>>    depending on if the immediate fits the corresponding field width of the
>>    native CPU instruction or not, so test that too.
>>
>> * Word order in load/store
>>    The word order should follow endianness. Test that DW load/store
>>    operations result in the expected word order in memory.
>>
>> * 32-bit eBPF argument zero extension
>>    On a 32-bit JIT the eBPF argument is a 32-bit pointer. If passed in
>>    a CPU register only one register in the mapped pair contains valid
>>    data. Verify that value is properly zero-extended.
>>
>> * Long conditional jumps
>>    Test to trigger the relative-to-absolute branch conversion in MIPS JITs,
>>    when the PC-relative offset overflows the field width of the MIPS branch
>>    instruction.
>>
>> * Tail calls
>>    A new test suite to test tail calls. Also test error paths and TCC
>>    limit.
>>
>> NOTE: There is a minor discrepancy between the interpreter and the
>> (x86) JITs. With MAX_TAIL_CALL_CNT = 32, the interpreter seems to allow
>> up to 33 tail calls, whereas the JITs stop at 32. This causes the max TCC
> 
> Given that the intended behavior was to allow 32, let's fix up the
> interpreter to be in line with the JITs?

Yes, let's fix up the interpreter.

Could you send a fix for the latter, Johan, along with this series?

Big thanks for adding all the new tests by the way!

>> test to fail for the JITs, since I used the interpreter as reference.
>> Either we change the interpreter behavior, change the JITs, or relax the
>> test to allow both behaviors.
>>
>> Let me know what you think.
>>
>> Cheers,
>> Johan
>>
>> Johan Almbladh (14):
>>    bpf/tests: add BPF_JMP32 test cases
>>    bpf/tests: add BPF_MOV tests for zero and sign extension
>>    bpf/tests: fix typos in test case descriptions
>>    bpf/tests: add more tests of ALU32 and ALU64 bitwise operations
>>    bpf/tests: add more ALU32 tests for BPF_LSH/RSH/ARSH
>>    bpf/tests: add more BPF_LSH/RSH/ARSH tests for ALU64
>>    bpf/tests: add more ALU64 BPF_MUL tests
>>    bpf/tests: add tests for ALU operations implemented with function
>>      calls
>>    bpf/tests: add word-order tests for load/store of double words
>>    bpf/tests: add branch conversion JIT test
>>    bpf/tests: add test for 32-bit context pointer argument passing
>>    bpf/tests: add tests for atomic operations
>>    bpf/tests: add tests for BPF_CMPXCHG
>>    bpf/tests: add tail call test suite
>>
>>   lib/test_bpf.c | 2732 +++++++++++++++++++++++++++++++++++++++++++-----
>>   1 file changed, 2475 insertions(+), 257 deletions(-)
>>
>> --
>> 2.25.1
>>


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite
  2021-07-28  8:27   ` Daniel Borkmann
@ 2021-07-28 12:15     ` Johan Almbladh
  2021-07-28 16:47     ` [PATCH] bpf: Fix off-by-one in tail call count limiting Johan Almbladh
  1 sibling, 0 replies; 29+ messages in thread
From: Johan Almbladh @ 2021-07-28 12:15 UTC (permalink / raw)
  To: Daniel Borkmann, Andrii Nakryiko
  Cc: Alexei Starovoitov, Andrii Nakryiko, Martin Lau, Song Liu,
	Yonghong Song, john fastabend, KP Singh, Tony Ambardar,
	Networking, bpf

On Wed, Jul 28, 2021 at 10:27 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
> Yes, let's fix up the interpreter.
>
> Could you send a fix for the latter, Johan, along with this series?

Thanks for the comments, Andrii and Daniel. Yes, I agree that fixing
up the interpreter makes the most sense. I'll submit the patches
shortly.

Johan

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH] bpf: Fix off-by-one in tail call count limiting
  2021-07-28  8:27   ` Daniel Borkmann
  2021-07-28 12:15     ` Johan Almbladh
@ 2021-07-28 16:47     ` Johan Almbladh
  2021-07-28 19:13       ` Yonghong Song
  1 sibling, 1 reply; 29+ messages in thread
From: Johan Almbladh @ 2021-07-28 16:47 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Before, the interpreter allowed up to MAX_TAIL_CALL_CNT + 1 tail calls.
Now precisely MAX_TAIL_CALL_CNT is allowed, which is in line with the
behavior of the x86 JITs.
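
A userspace sketch of the boundary (illustrative only, not kernel
code):

    #include <stdio.h>

    #define MAX_TAIL_CALL_CNT 32

    /* The interpreter checks the count before each tail call and then
     * increments it, so the number of allowed tail calls equals the
     * number of cnt values that pass the check. */
    static int allowed_calls(int use_ge)
    {
            int cnt, calls = 0;

            for (cnt = 0; ; cnt++) {
                    if (use_ge ? cnt >= MAX_TAIL_CALL_CNT  /* new */
                               : cnt >  MAX_TAIL_CALL_CNT) /* old */
                            break;
                    calls++;
            }
            return calls;
    }

    int main(void)
    {
            printf("old '>'  check: %d\n", allowed_calls(0)); /* 33 */
            printf("new '>=' check: %d\n", allowed_calls(1)); /* 32 */
            return 0;
    }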

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 kernel/bpf/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 9b1577498373..67682b3afc84 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1559,7 +1559,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 
 		if (unlikely(index >= array->map.max_entries))
 			goto out;
-		if (unlikely(tail_call_cnt > MAX_TAIL_CALL_CNT))
+		if (unlikely(tail_call_cnt >= MAX_TAIL_CALL_CNT))
 			goto out;
 
 		tail_call_cnt++;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH] bpf: Fix off-by-one in tail call count limiting
  2021-07-28 16:47     ` [PATCH] bpf: Fix off-by-one in tail call count limiting Johan Almbladh
@ 2021-07-28 19:13       ` Yonghong Song
  2021-07-29 21:37         ` Johan Almbladh
  0 siblings, 1 reply; 29+ messages in thread
From: Yonghong Song @ 2021-07-28 19:13 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 9:47 AM, Johan Almbladh wrote:
> Before, the interpreter allowed up to MAX_TAIL_CALL_CNT + 1 tail calls.
> Now precisely MAX_TAIL_CALL_CNT is allowed, which is in line with the
> behavior of the x86 JITs.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

LGTM.

Acked-by: Yonghong Song <yhs@fb.com>

I also checked arm/arm64 jit. I saw the following comments:

         /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
          *      goto out;
          * tail_call_cnt++;
          */

Maybe we have this MAX_TAIL_CALL_CNT + 1 issue
for arm/arm64 jit?

> ---
>   kernel/bpf/core.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 9b1577498373..67682b3afc84 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1559,7 +1559,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
>   
>   		if (unlikely(index >= array->map.max_entries))
>   			goto out;
> -		if (unlikely(tail_call_cnt > MAX_TAIL_CALL_CNT))
> +		if (unlikely(tail_call_cnt >= MAX_TAIL_CALL_CNT))
>   			goto out;
>   
>   		tail_call_cnt++;
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH] bpf: Fix off-by-one in tail call count limiting
  2021-07-28 19:13       ` Yonghong Song
@ 2021-07-29 21:37         ` Johan Almbladh
  2021-07-29 22:29           ` Andrii Nakryiko
  0 siblings, 1 reply; 29+ messages in thread
From: Johan Almbladh @ 2021-07-29 21:37 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf

On Wed, Jul 28, 2021 at 9:13 PM Yonghong Song <yhs@fb.com> wrote:
> I also checked arm/arm64 jit. I saw the following comments:
>
>          /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
>           *      goto out;
>           * tail_call_cnt++;
>           */
>
> Maybe we have this MAX_TAIL_CALL_CNT + 1 issue
> for arm/arm64 jit?

That wouldn't be unreasonable. I don't have an arm or arm64 setup
available right now, but I can try to test it in qemu.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH] bpf: Fix off-by-one in tail call count limiting
  2021-07-29 21:37         ` Johan Almbladh
@ 2021-07-29 22:29           ` Andrii Nakryiko
  2021-07-29 22:48             ` Andrii Nakryiko
  0 siblings, 1 reply; 29+ messages in thread
From: Andrii Nakryiko @ 2021-07-29 22:29 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Yonghong Song, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh, Tony Ambardar, Networking, bpf

On Thu, Jul 29, 2021 at 2:38 PM Johan Almbladh
<johan.almbladh@anyfinetworks.com> wrote:
>
> On Wed, Jul 28, 2021 at 9:13 PM Yonghong Song <yhs@fb.com> wrote:
> > I also checked arm/arm64 jit. I saw the following comments:
> >
> >          /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> >           *      goto out;
> >           * tail_call_cnt++;
> >           */
> >
> > Maybe we have this MAX_TAIL_CALL_CNT + 1 issue
> > for arm/arm64 jit?
>
> That wouldn't be unreasonable. I don't have an arm or arm64 setup
> available right now, but I can try to test it in qemu.

On a brief check, there seems to be quite a mess in terms of the code
and comments.

E.g., in arch/x86/net/bpf_jit_comp32.c:

        /*
         * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
         *     goto out;
         */

                            ^^^^ here comment is wrong

        [...]

        /* cmp edx,hi */
        EMIT3(0x83, add_1reg(0xF8, IA32_EBX), hi);
        EMIT2(IA32_JNE, 3);
        /* cmp ecx,lo */
        EMIT3(0x83, add_1reg(0xF8, IA32_ECX), lo);

        /* ja out */
        EMIT2(IA32_JAE, jmp_label(jmp_label1, 2));

        ^^^ JAE is >=, right? But the comment says JA.
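
If I read the sequence right, its combined effect is an unsigned
64-bit '>='. A C sketch of that reading (illustrative, not the
emitted code):

        #include <stdbool.h>
        #include <stdint.h>

        /* Compare the high words first; only when they are equal does
         * the low-word compare decide. "jae out" taken is then
         * equivalent to the 64-bit unsigned cnt >= limit. */
        static bool jae_out_taken(uint32_t cnt_hi, uint32_t cnt_lo,
                                  uint32_t hi, uint32_t lo)
        {
                if (cnt_hi != hi)            /* cmp edx,hi ; jne     */
                        return cnt_hi >= hi; /* flags from high cmp  */
                return cnt_lo >= lo;         /* cmp ecx,lo ; jae out */
        }

If that is right, the emitted check is '>=' and only the comments
say '>'.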


As for arch/x86/net/bpf_jit_comp.c, both the comment and the code seem
to do > MAX_TAIL_CALL_CNT, but you are saying the JIT is correct. What
am I missing?

Can you please check all the places where MAX_TAIL_CALL_CNT is used
throughout the code? Let's clean this up in one go.

Also, given it's so easy to make this off-by-one error, can you please
add a negative test validating that 33 tail calls are not allowed? I
assume we have a positive test that allows exactly MAX_TAIL_CALL_CNT,
but please double-check that as well.

I also wonder if it would make sense to convert these
internal-but-sort-of-advertised constants like MAX_TAIL_CALL_CNT and
BPF_COMPLEXITY_LIMIT_INSNS into enums so that they can be "discovered"
from BTF. This should be discussed/attempted outside of this fix,
though. Just bringing it up here.
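
To sketch what I mean (illustrative only), an anonymous enum is
recorded in BTF, while a #define never makes it there:

        enum {
                MAX_TAIL_CALL_CNT = 32,
        };

A BTF-aware tool could then discover the value from vmlinux BTF
instead of hardcoding it.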

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH] bpf: Fix off-by-one in tail call count limiting
  2021-07-29 22:29           ` Andrii Nakryiko
@ 2021-07-29 22:48             ` Andrii Nakryiko
  2021-08-01  8:37               ` Johan Almbladh
  0 siblings, 1 reply; 29+ messages in thread
From: Andrii Nakryiko @ 2021-07-29 22:48 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Yonghong Song, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh, Tony Ambardar, Networking, bpf

On Thu, Jul 29, 2021 at 3:29 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Jul 29, 2021 at 2:38 PM Johan Almbladh
> <johan.almbladh@anyfinetworks.com> wrote:
> >
> > On Wed, Jul 28, 2021 at 9:13 PM Yonghong Song <yhs@fb.com> wrote:
> > > I also checked arm/arm64 jit. I saw the following comments:
> > >
> > >          /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> > >           *      goto out;
> > >           * tail_call_cnt++;
> > >           */
> > >
> > > Maybe we have this MAX_TAIL_CALL_CNT + 1 issue
> > > for arm/arm64 jit?
> >
> > That wouldn't be unreasonable. I don't have an arm or arm64 setup
> > available right now, but I can try to test it in qemu.
>
> On a brief check, there seems to be quite a mess in terms of the code
> and comments.
>
> E.g., in arch/x86/net/bpf_jit_comp32.c:
>
>         /*
>          * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
>          *     goto out;
>          */
>
>                             ^^^^ here comment is wrong
>
>         [...]
>
>         /* cmp edx,hi */
>         EMIT3(0x83, add_1reg(0xF8, IA32_EBX), hi);
>         EMIT2(IA32_JNE, 3);
>         /* cmp ecx,lo */
>         EMIT3(0x83, add_1reg(0xF8, IA32_ECX), lo);
>
>         /* ja out */
>         EMIT2(IA32_JAE, jmp_label(jmp_label1, 2));
>
>         ^^^ JAE is >=, right? But the comment says JA.
>
>
> As for arch/x86/net/bpf_jit_comp.c, both the comment and the code seem
> to do > MAX_TAIL_CALL_CNT, but you are saying the JIT is correct. What
> am I missing?
>
> Can you please check all the places where MAX_TAIL_CALL_CNT is used
> throughout the code? Let's clean this up in one go.
>
> Also, given it's so easy to make this off-by-one error, can you please
> add a negative test validating that 33 tail calls are not allowed? I
> assume we have a positive test that allows exactly MAX_TAIL_CALL_CNT,
> but please double-check that as well.

Ok, I see that you've added this in your bpf tests patch set. Please
consider, additionally, implementing a similar test as part of
selftests/bpf (specifically in test_progs). We run test_progs
continuously in CI for every incoming patch/patchset, so it has much
higher chances of capturing any regressions.

I'm also thinking that this MAX_TAIL_CALL_CNT change should probably
go into the bpf-next tree. First, this off-by-one behavior was around
for a while and it doesn't cause serious issues, even if abused. But
on the other hand, it will make your tail call tests fail, when
applied into bpf-next without your change. So I think we should apply
both into bpf-next.

On a related topic, please don't forget to include the target kernel
tree for your patches: [PATCH bpf] or [PATCH bpf-next].


>
> I also wonder if it would make sense to convert these
> internal-but-sort-of-advertised constants like MAX_TAIL_CALL_CNT and
> BPF_COMPLEXITY_LIMIT_INSNS into enums so that they can be "discovered"
> from BTF. This should be discussed/attempted outside of this fix,
> though. Just bringing it up here.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH] bpf: Fix off-by-one in tail call count limiting
  2021-07-29 22:48             ` Andrii Nakryiko
@ 2021-08-01  8:37               ` Johan Almbladh
  2021-08-02 20:28                 ` Andrii Nakryiko
  0 siblings, 1 reply; 29+ messages in thread
From: Johan Almbladh @ 2021-08-01  8:37 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Yonghong Song, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh, Tony Ambardar, Networking, bpf

On Fri, Jul 30, 2021 at 12:48 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Jul 29, 2021 at 3:29 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Thu, Jul 29, 2021 at 2:38 PM Johan Almbladh
> > <johan.almbladh@anyfinetworks.com> wrote:
> > >
> > > On Wed, Jul 28, 2021 at 9:13 PM Yonghong Song <yhs@fb.com> wrote:
> > > > I also checked arm/arm64 jit. I saw the following comments:
> > > >
> > > >          /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> > > >           *      goto out;
> > > >           * tail_call_cnt++;
> > > >           */
> > > >
> > > > Maybe we have this MAX_TAIL_CALL_CNT + 1 issue
> > > > for arm/arm64 jit?
> > >
> > > That wouldn't be unreasonable. I don't have an arm or arm64 setup
> > > available right now, but I can try to test it in qemu.
> >
> > On a brief check, there seems to be quite a mess in terms of the code
> > and comments.
> >
> > E.g., in arch/x86/net/bpf_jit_comp32.c:
> >
> >         /*
> >          * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> >          *     goto out;
> >          */
> >
> >                             ^^^^ here comment is wrong
> >
> >         [...]
> >
> >         /* cmp edx,hi */
> >         EMIT3(0x83, add_1reg(0xF8, IA32_EBX), hi);
> >         EMIT2(IA32_JNE, 3);
> >         /* cmp ecx,lo */
> >         EMIT3(0x83, add_1reg(0xF8, IA32_ECX), lo);
> >
> >         /* ja out */
> >         EMIT2(IA32_JAE, jmp_label(jmp_label1, 2));
> >
> >         ^^^ JAE is >=, right? But the comment says JA.
> >
> >
> > As for arch/x86/net/bpf_jit_comp.c, both the comment and the code seem
> > to do > MAX_TAIL_CALL_CNT, but you are saying the JIT is correct. What
> > am I missing?
> >
> > Can you please check all the places where MAX_TAIL_CALL_CNT is used
> > throughout the code? Let's clean this up in one go.
> >
> > Also, given it's so easy to make this off-by-one error, can you please
> > add a negative test validating that 33 tail calls are not allowed? I
> > assume we have a positive test that allows exactly MAX_TAIL_CALL_CNT,
> > but please double-check that as well.
>
> Ok, I see that you've added this in your bpf tests patch set. Please
> consider, additionally, implementing a similar test as part of
> selftests/bpf (specifically in test_progs). We run test_progs
> continuously in CI for every incoming patch/patchset, so it has much
> higher chances of capturing any regressions.
>
> I'm also thinking that this MAX_TAIL_CALL_CNT change should probably
> go into the bpf-next tree. First, this off-by-one behavior was around
> for a while and it doesn't cause serious issues, even if abused. But
> on the other hand, it will make your tail call tests fail, when
> applied into bpf-next without your change. So I think we should apply
> both into bpf-next.

I can confirm that the off-by-one behaviour is present on arm. Below
is the test output running on qemu. Test #4 calls itself recursively
and increments a counter each time, so the correct result should be 1
+ MAX_TAIL_CALL_CNT.

test_bpf: #0 Tail call leaf jited:1 71 PASS
test_bpf: #1 Tail call 2 jited:1 134 PASS
test_bpf: #2 Tail call 3 jited:1 164 PASS
test_bpf: #3 Tail call 4 jited:1 257 PASS
test_bpf: #4 Tail call error path, max count reached jited:1 ret 34 != 33 FAIL
test_bpf: #5 Tail call error path, NULL target jited:1 114 PASS
test_bpf: #6 Tail call error path, index out of range jited:1 112 PASS
test_bpf: test_tail_calls: Summary: 6 PASSED, 1 FAILED, [7/7 JIT'ed]
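
The counting works like this C sketch (illustrative; the real test is
raw BPF instructions in lib/test_bpf.c):

    /* The program body runs once on initial entry and once more per
     * successful tail call, so with N allowed tail calls the counter
     * ends at 1 + N. The arm JIT evidently allows N = 33, hence the
     * returned 34 instead of the expected 33. */
    static int final_count(int allowed_tail_calls)
    {
            int count = 0;

            do
                    count++;                      /* program body  */
            while (allowed_tail_calls-- > 0);     /* tail call OK? */

            return count;                         /* 1 + N         */
    }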

The MAX_TAIL_CALL_CNT constant is referenced in the following JITs.

arch/arm64/net/bpf_jit_comp.c
arch/arm/net/bpf_jit_32.c
arch/mips/net/ebpf_jit.c
arch/powerpc/net/bpf_jit_comp32.c
arch/powerpc/net/bpf_jit_comp64.c
arch/riscv/net/bpf_jit_comp32.c
arch/riscv/net/bpf_jit_comp64.c
arch/s390/net/bpf_jit_comp.c
arch/sparc/net/bpf_jit_comp_64.c
arch/x86/net/bpf_jit_comp32.c
arch/x86/net/bpf_jit_comp.c

The x86 JITs all pass the test, even though the comments are wrong.
The comments can easily be fixed of course. For JITs that have the
off-by-one behaviour, an easy fix would be to change all occurrences
of MAX_TAIL_CALL_CNT to MAX_TAIL_CALL_CNT - 1. We must first know
which JITs are affected though.
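
Changing the constant is safe for a '>'-style check, since for any
unsigned cnt the condition cnt > MAX_TAIL_CALL_CNT - 1 is the same as
cnt >= MAX_TAIL_CALL_CNT. A quick brute-force check of the identity:

    #include <assert.h>

    #define MAX_TAIL_CALL_CNT 32

    int main(void)
    {
            unsigned int cnt;

            for (cnt = 0; cnt < 2 * MAX_TAIL_CALL_CNT; cnt++)
                    assert((cnt >  MAX_TAIL_CALL_CNT - 1) ==
                           (cnt >= MAX_TAIL_CALL_CNT));
            return 0;
    }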

The fix is easy but setting up the test is hard. It took me quite some
time to get the qemu/arm setup up and running. If the same has to be
done for arm64, mips64, powerpc, powerpc64, riscv32, riscv64, sparc and
s390, I will need some help with this. If someone already has a
working setup for any of the systems, the test can be performed on
that.

Or perhaps there is a better way to do this? If I implement a similar
test in selftest/bpf, that would trigger the CI when the patch is
submitted and we will see which JITs we need to fix.

> On a related topic, please don't forget to include the target kernel
> tree for your patches: [PATCH bpf] or [PATCH bpf-next].

I'll add that! All patches I sent related to this are for the bpf-next tree.

Johan

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH] bpf: Fix off-by-one in tail call count limiting
  2021-08-01  8:37               ` Johan Almbladh
@ 2021-08-02 20:28                 ` Andrii Nakryiko
  2021-08-05 14:37                   ` Johan Almbladh
  0 siblings, 1 reply; 29+ messages in thread
From: Andrii Nakryiko @ 2021-08-02 20:28 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Yonghong Song, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh, Tony Ambardar, Networking, bpf

On Sun, Aug 1, 2021 at 1:38 AM Johan Almbladh
<johan.almbladh@anyfinetworks.com> wrote:
>
> On Fri, Jul 30, 2021 at 12:48 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Thu, Jul 29, 2021 at 3:29 PM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Thu, Jul 29, 2021 at 2:38 PM Johan Almbladh
> > > <johan.almbladh@anyfinetworks.com> wrote:
> > > >
> > > > On Wed, Jul 28, 2021 at 9:13 PM Yonghong Song <yhs@fb.com> wrote:
> > > > > I also checked arm/arm64 jit. I saw the following comments:
> > > > >
> > > > >          /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> > > > >           *      goto out;
> > > > >           * tail_call_cnt++;
> > > > >           */
> > > > >
> > > > > Maybe we have this MAX_TAIL_CALL_CNT + 1 issue
> > > > > for arm/arm64 jit?
> > > >
> > > > That wouldn't be unreasonable. I don't have an arm or arm64 setup
> > > > available right now, but I can try to test it in qemu.
> > >
> > > On a brief check, there seems to be quite a mess in terms of the code
> > > and comments.
> > >
> > > E.g., in arch/x86/net/bpf_jit_comp32.c:
> > >
> > >         /*
> > >          * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> > >          *     goto out;
> > >          */
> > >
> > >                             ^^^^ here comment is wrong
> > >
> > >         [...]
> > >
> > >         /* cmp edx,hi */
> > >         EMIT3(0x83, add_1reg(0xF8, IA32_EBX), hi);
> > >         EMIT2(IA32_JNE, 3);
> > >         /* cmp ecx,lo */
> > >         EMIT3(0x83, add_1reg(0xF8, IA32_ECX), lo);
> > >
> > >         /* ja out */
> > >         EMIT2(IA32_JAE, jmp_label(jmp_label1, 2));
> > >
> > >         ^^^ JAE is >=, right? But the comment says JA.
> > >
> > >
> > > As for arch/x86/net/bpf_jit_comp.c, both the comment and the code seem
> > > to do > MAX_TAIL_CALL_CNT, but you are saying the JIT is correct. What
> > > am I missing?
> > >
> > > Can you please check all the places where MAX_TAIL_CALL_CNT is used
> > > throughout the code? Let's clean this up in one go.
> > >
> > > Also, given it's so easy to make this off-by-one error, can you please
> > > add a negative test validating that 33 tail calls are not allowed? I
> > > assume we have a positive test that allows exactly MAX_TAIL_CALL_CNT,
> > > but please double-check that as well.
> >
> > Ok, I see that you've added this in your bpf tests patch set. Please
> > consider, additionally, implementing a similar test as part of
> > selftests/bpf (specifically in test_progs). We run test_progs
> > continuously in CI for every incoming patch/patchset, so it has much
> > higher chances of capturing any regressions.
> >
> > I'm also thinking that this MAX_TAIL_CALL_CNT change should probably
> > go into the bpf-next tree. First, this off-by-one behavior was around
> > for a while and it doesn't cause serious issues, even if abused. But
> > on the other hand, it will make your tail call tests fail, when
> > applied into bpf-next without your change. So I think we should apply
> > both into bpf-next.
>
> I can confirm that the off-by-one behaviour is present on arm. Below
> is the test output running on qemu. Test #4 calls itself recursively
> and increments a counter each time, so the correct result should be 1
> + MAX_TAIL_CALL_CNT.
>
> test_bpf: #0 Tail call leaf jited:1 71 PASS
> test_bpf: #1 Tail call 2 jited:1 134 PASS
> test_bpf: #2 Tail call 3 jited:1 164 PASS
> test_bpf: #3 Tail call 4 jited:1 257 PASS
> test_bpf: #4 Tail call error path, max count reached jited:1 ret 34 != 33 FAIL
> test_bpf: #5 Tail call error path, NULL target jited:1 114 PASS
> test_bpf: #6 Tail call error path, index out of range jited:1 112 PASS
> test_bpf: test_tail_calls: Summary: 6 PASSED, 1 FAILED, [7/7 JIT'ed]
>
> The MAX_TAIL_CALL_CNT constant is referenced in the following JITs.
>
> arch/arm64/net/bpf_jit_comp.c
> arch/arm/net/bpf_jit_32.c
> arch/mips/net/ebpf_jit.c
> arch/powerpc/net/bpf_jit_comp32.c
> arch/powerpc/net/bpf_jit_comp64.c
> arch/riscv/net/bpf_jit_comp32.c
> arch/riscv/net/bpf_jit_comp64.c
> arch/s390/net/bpf_jit_comp.c
> arch/sparc/net/bpf_jit_comp_64.c
> arch/x86/net/bpf_jit_comp32.c
> arch/x86/net/bpf_jit_comp.c
>
> The x86 JITs all pass the test, even though the comments are wrong.
> The comments can easily be fixed of course. For JITs that have the
> off-by-one behaviour, an easy fix would be to change all occurrences
> of MAX_TAIL_CALL_CNT to MAX_TAIL_CALL_CNT - 1. We must first know
> which JITs are affected though.

If you are going to fix ARM, please send a fix for the x86 comments as well.

>
> The fix is easy but setting up the test is hard. It took me quite some
> time to get the qemu/arm setup up and running. If the same has to be
> done for arm64, mips64, powerpc, powerpc64, riscv32, riscv64, sparc and
> s390, I will need some help with this. If someone already has a
> working setup for any of the systems, the test can be performed on
> that.
>

Unfortunately, I myself have only an x86-64 setup. The libbpf
CI/kernel-patches CI we use to run all tests runs selftests against
x86-64 only as well. There was a temporarily halted effort to add
s390x support, but it's not done yet. No one has yet volunteered to
set up any other platforms, and I don't know whether that's possible
or how hard it would be within the GitHub Actions platform we are
currently using.

So in short, I understand the challenges of testing all those
platforms and I don't really expect any single person to do all that
work. I've applied your fix; please follow up with the ARM and comment
fixes.

> Or perhaps there is a better way to do this? If I implement a similar
> test in selftest/bpf, that would trigger the CI when the patch is
> submitted and we will see which JITs we need to fix.

The other nice benefit of implementing this in selftest/bpf, besides
continuous testing, is that you write it in C, which allows you to
express much more complicated logic more easily.
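
For what it's worth, the BPF side of such a test could look roughly
like this sketch (program name, section and map layout are my
assumptions, not an existing selftest):

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        struct {
                __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
                __uint(max_entries, 1);
                __uint(key_size, sizeof(__u32));
                __uint(value_size, sizeof(__u32));
        } jmp_table SEC(".maps");

        int count = 0; /* read back by the userspace half of the test */

        SEC("classifier")
        int tailcall_count(struct __sk_buff *skb)
        {
                count++;                           /* once per entry  */
                bpf_tail_call(skb, &jmp_table, 0); /* slot 0 is this
                                                    * prog, set up by
                                                    * userspace       */
                return count;                      /* reached only when
                                                    * the limit hits  */
        }

        char _license[] SEC("license") = "GPL";

The userspace half would store the program's own fd in slot 0, run the
program once and assert that count ends up at exactly
1 + MAX_TAIL_CALL_CNT.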

>
> > On a related topic, please don't forget to include the target kernel
> > tree for your patches: [PATCH bpf] or [PATCH bpf-next].
>
> I'll add that! All patches I sent related to this are for the bpf-next tree.
>
> Johan

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH] bpf: Fix off-by-one in tail call count limiting
  2021-08-02 20:28                 ` Andrii Nakryiko
@ 2021-08-05 14:37                   ` Johan Almbladh
  2021-08-05 22:54                     ` Andrii Nakryiko
  0 siblings, 1 reply; 29+ messages in thread
From: Johan Almbladh @ 2021-08-05 14:37 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Yonghong Song, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh, Tony Ambardar, Networking, bpf

On Mon, Aug 2, 2021 at 10:28 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Sun, Aug 1, 2021 at 1:38 AM Johan Almbladh
> <johan.almbladh@anyfinetworks.com> wrote:
> >
> > On Fri, Jul 30, 2021 at 12:48 AM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Thu, Jul 29, 2021 at 3:29 PM Andrii Nakryiko
> > > <andrii.nakryiko@gmail.com> wrote:
> > > >
> > > > On Thu, Jul 29, 2021 at 2:38 PM Johan Almbladh
> > > > <johan.almbladh@anyfinetworks.com> wrote:
> > > > >
> > > > > On Wed, Jul 28, 2021 at 9:13 PM Yonghong Song <yhs@fb.com> wrote:
> > > > > > I also checked arm/arm64 jit. I saw the following comments:
> > > > > >
> > > > > >          /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> > > > > >           *      goto out;
> > > > > >           * tail_call_cnt++;
> > > > > >           */
> > > > > >
> > > > > > Maybe we have this MAX_TAIL_CALL_CNT + 1 issue
> > > > > > for arm/arm64 jit?
> > > > >
> > > > > That wouldn't be unreasonable. I don't have an arm or arm64 setup
> > > > > available right now, but I can try to test it in qemu.
> > > >
> > > > On a brief check, there seems to be quite a mess in terms of the code
> > > > and comments.
> > > >
> > > > E.g., in arch/x86/net/bpf_jit_comp32.c:
> > > >
> > > >         /*
> > > >          * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> > > >          *     goto out;
> > > >          */
> > > >
> > > >                             ^^^^ here comment is wrong
> > > >
> > > >         [...]
> > > >
> > > >         /* cmp edx,hi */
> > > >         EMIT3(0x83, add_1reg(0xF8, IA32_EBX), hi);
> > > >         EMIT2(IA32_JNE, 3);
> > > >         /* cmp ecx,lo */
> > > >         EMIT3(0x83, add_1reg(0xF8, IA32_ECX), lo);
> > > >
> > > >         /* ja out */
> > > >         EMIT2(IA32_JAE, jmp_label(jmp_label1, 2));
> > > >
> > > >         ^^^ JAE is >=, right? But the comment says JA.
> > > >
> > > >
> > > > As for arch/x86/net/bpf_jit_comp.c, both the comment and the code seem
> > > > to do > MAX_TAIL_CALL_CNT, but you are saying the JIT is correct. What
> > > > am I missing?
> > > >
> > > > Can you please check all the places where MAX_TAIL_CALL_CNT is used
> > > > throughout the code? Let's clean this up in one go.
> > > >
> > > > Also, given it's so easy to make this off-by-one error, can you please
> > > > add a negative test validating that 33 tail calls are not allowed? I
> > > > assume we have a positive test that allows exactly MAX_TAIL_CALL_CNT,
> > > > but please double-check that as well.
> > >
> > > Ok, I see that you've added this in your bpf tests patch set. Please
> > > consider, additionally, implementing a similar test as part of
> > > selftests/bpf (specifically in test_progs). We run test_progs
> > > continuously in CI for every incoming patch/patchset, so it has much
> > > higher chances of capturing any regressions.
> > >
> > > I'm also thinking that this MAX_TAIL_CALL_CNT change should probably
> > > go into the bpf-next tree. First, this off-by-one behavior was around
> > > for a while and it doesn't cause serious issues, even if abused. But
> > > on the other hand, it will make your tail call tests fail, when
> > > applied into bpf-next without your change. So I think we should apply
> > > both into bpf-next.
> >
> > I can confirm that the off-by-one behaviour is present on arm. Below
> > is the test output running on qemu. Test #4 calls itself recursively
> > and increments a counter each time, so the correct result should be 1
> > + MAX_TAIL_CALL_CNT.
> >
> > test_bpf: #0 Tail call leaf jited:1 71 PASS
> > test_bpf: #1 Tail call 2 jited:1 134 PASS
> > test_bpf: #2 Tail call 3 jited:1 164 PASS
> > test_bpf: #3 Tail call 4 jited:1 257 PASS
> > test_bpf: #4 Tail call error path, max count reached jited:1 ret 34 != 33 FAIL
> > test_bpf: #5 Tail call error path, NULL target jited:1 114 PASS
> > test_bpf: #6 Tail call error path, index out of range jited:1 112 PASS
> > test_bpf: test_tail_calls: Summary: 6 PASSED, 1 FAILED, [7/7 JIT'ed]
> >
> > The MAX_TAIL_CALL_CNT constant is referenced in the following JITs.
> >
> > arch/arm64/net/bpf_jit_comp.c
> > arch/arm/net/bpf_jit_32.c
> > arch/mips/net/ebpf_jit.c
> > arch/powerpc/net/bpf_jit_comp32.c
> > arch/powerpc/net/bpf_jit_comp64.c
> > arch/riscv/net/bpf_jit_comp32.c
> > arch/riscv/net/bpf_jit_comp64.c
> > arch/s390/net/bpf_jit_comp.c
> > arch/sparc/net/bpf_jit_comp_64.c
> > arch/x86/net/bpf_jit_comp32.c
> > arch/x86/net/bpf_jit_comp.c
> >
> > The x86 JITs all pass the test, even though the comments are wrong.
> > The comments can easily be fixed of course. For JITs that have the
> > off-by-one behaviour, an easy fix would be to change all occurrences
> > of MAX_TAIL_CALL_CNT to MAX_TAIL_CALL_CNT - 1. We must first know
> > which JITs are affected though.
>
> If you are going to fix ARM, please send a fix for the x86 comments as well.
>
> >
> > The fix is easy but setting up the test is hard. It took me quite some
> > time to get the qemu/arm setup up and running. If the same has to be
> > done for arm64, mips64, powerpc, powerpc64, riscv32, riscv64, sparc and
> > s390, I will need some help with this. If someone already has a
> > working setup for any of the systems, the test can be performed on
> > that.
> >
>
> Unfortunately, I myself have only an x86-64 setup. The libbpf
> CI/kernel-patches CI we use to run all tests runs selftests against
> x86-64 only as well. There was a temporarily halted effort to add
> s390x support, but it's not done yet. No one has yet volunteered to
> set up any other platforms, and I don't know whether that's possible
> or how hard it would be within the GitHub Actions platform we are
> currently using.
>
> So in short, I understand the challenges of testing all those
> platforms and I don't really expect any single person to do all that
> work. I've applied your fix; please follow up with the ARM and comment
> fixes.

Thanks! I will fix the ARM JIT and the comments, then submit an
updated patch set for the test suite with changes after Yonghong's
review.

My current test setup can easily cross-compile the kernel with busybox
as userspace. However, getting it to run on QEMU has required some
amount of detective work. Every platform seems to be different in
terms of what to boot (vmlinux, zImage, bzImage), how to boot it (dtb,
BIOS, U-Boot requirements) and QEMU vs Kconfig settings. Currently I
can run i386, x86_64, MIPS, MIPS64 and ARM under QEMU. I can verify
and, if needed, fix the JIT on some of the other platforms as well, if I
can get it to run on QEMU with a reasonable effort. However, I cannot
build for RISC-V since I don't have a toolchain for that. I build my
toolchains with crosstool-ng using libmusl, and the latter does not
currently support RISC-V.

As a side note, I think having a QEMU-compatible defconfig for each
platform would make it easier to test arch-specific code. It could
also be a first step towards fully automated arch-specific CI.
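
For test_bpf I would imagine such a fragment boiling down to roughly
the following (the exact symbol set is a guess and varies per
platform), plus whatever console and initramfs settings the platform's
QEMU machine model needs:

    CONFIG_NET=y
    CONFIG_BPF=y
    CONFIG_BPF_SYSCALL=y
    CONFIG_BPF_JIT=y
    CONFIG_TEST_BPF=m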

Sorry for being a bit slow to respond. I am currently travelling with
only sporadic access to e-mail.

>
> > Or perhaps there is a better way to do this? If I implement a similar
> > test in selftest/bpf, that would trigger the CI when the patch is
> > submitted and we will see which JITs we need to fix.
>
> The other nice benefit of implementing this in selftest/bpf, besides
> continuous testing, is that you write it in C, which allows you to
> express much more complicated logic more easily.
>
> >
> > > On a related topic, please don't forget to include the target kernel
> > > tree for your patches: [PATCH bpf] or [PATCH bpf-next].
> >
> > I'll add that! All patches I sent related to this are for the bpf-next tree.
> >
> > Johan

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH] bpf: Fix off-by-one in tail call count limiting
  2021-08-05 14:37                   ` Johan Almbladh
@ 2021-08-05 22:54                     ` Andrii Nakryiko
  0 siblings, 0 replies; 29+ messages in thread
From: Andrii Nakryiko @ 2021-08-05 22:54 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Yonghong Song, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh, Tony Ambardar, Networking, bpf

On Thu, Aug 5, 2021 at 7:38 AM Johan Almbladh
<johan.almbladh@anyfinetworks.com> wrote:
>
> On Mon, Aug 2, 2021 at 10:28 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Sun, Aug 1, 2021 at 1:38 AM Johan Almbladh
> > <johan.almbladh@anyfinetworks.com> wrote:
> > >
> > > On Fri, Jul 30, 2021 at 12:48 AM Andrii Nakryiko
> > > <andrii.nakryiko@gmail.com> wrote:
> > > >
> > > > On Thu, Jul 29, 2021 at 3:29 PM Andrii Nakryiko
> > > > <andrii.nakryiko@gmail.com> wrote:
> > > > >
> > > > > On Thu, Jul 29, 2021 at 2:38 PM Johan Almbladh
> > > > > <johan.almbladh@anyfinetworks.com> wrote:
> > > > > >
> > > > > > On Wed, Jul 28, 2021 at 9:13 PM Yonghong Song <yhs@fb.com> wrote:
> > > > > > > I also checked arm/arm64 jit. I saw the following comments:
> > > > > > >
> > > > > > >          /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> > > > > > >           *      goto out;
> > > > > > >           * tail_call_cnt++;
> > > > > > >           */
> > > > > > >
> > > > > > > Maybe we have this MAX_TAIL_CALL_CNT + 1 issue
> > > > > > > for arm/arm64 jit?
> > > > > >
> > > > > > That wouldn't be unreasonable. I don't have an arm or arm64 setup
> > > > > > available right now, but I can try to test it in qemu.
> > > > >
> > > > > On a brief check, there seems to be quite a mess in terms of the code
> > > > > and comments.
> > > > >
> > > > > E.g., in arch/x86/net/bpf_jit_comp32.c:
> > > > >
> > > > >         /*
> > > > >          * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
> > > > >          *     goto out;
> > > > >          */
> > > > >
> > > > >                             ^^^^ here comment is wrong
> > > > >
> > > > >         [...]
> > > > >
> > > > >         /* cmp edx,hi */
> > > > >         EMIT3(0x83, add_1reg(0xF8, IA32_EBX), hi);
> > > > >         EMIT2(IA32_JNE, 3);
> > > > >         /* cmp ecx,lo */
> > > > >         EMIT3(0x83, add_1reg(0xF8, IA32_ECX), lo);
> > > > >
> > > > >         /* ja out */
> > > > >         EMIT2(IA32_JAE, jmp_label(jmp_label1, 2));
> > > > >
> > > > >         ^^^ JAE is >=, right? But the comment says JA.
> > > > >
> > > > >
> > > > > As for arch/x86/net/bpf_jit_comp.c, both the comment and the code seem
> > > > > to do > MAX_TAIL_CALL_CNT, but you are saying the JIT is correct. What
> > > > > am I missing?
> > > > >
> > > > > Can you please check all the places where MAX_TAIL_CALL_CNT is used
> > > > > throughout the code? Let's clean this up in one go.
> > > > >
> > > > > Also, given it's so easy to make this off-by-one error, can you please
> > > > > add a negative test validating that 33 tail calls are not allowed? I
> > > > > assume we have a positive test that allows exactly MAX_TAIL_CALL_CNT,
> > > > > but please double-check that as well.
> > > >
> > > > Ok, I see that you've added this in your bpf tests patch set. Please
> > > > consider, additionally, implementing a similar test as part of
> > > > selftests/bpf (specifically in test_progs). We run test_progs
> > > > continuously in CI for every incoming patch/patchset, so it has much
> > > > higher chances of capturing any regressions.
> > > >
> > > > I'm also thinking that this MAX_TAIL_CALL_CNT change should probably
> > > > go into the bpf-next tree. First, this off-by-one behavior was around
> > > > for a while and it doesn't cause serious issues, even if abused. But
> > > > on the other hand, it will make your tail call tests fail, when
> > > > applied into bpf-next without your change. So I think we should apply
> > > > both into bpf-next.
> > >
> > > I can confirm that the off-by-one behaviour is present on arm. Below
> > > is the test output running on qemu. Test #4 calls itself recursively
> > > and increments a counter each time, so the correct result should be 1
> > > + MAX_TAIL_CALL_CNT.
> > >
> > > test_bpf: #0 Tail call leaf jited:1 71 PASS
> > > test_bpf: #1 Tail call 2 jited:1 134 PASS
> > > test_bpf: #2 Tail call 3 jited:1 164 PASS
> > > test_bpf: #3 Tail call 4 jited:1 257 PASS
> > > test_bpf: #4 Tail call error path, max count reached jited:1 ret 34 != 33 FAIL
> > > test_bpf: #5 Tail call error path, NULL target jited:1 114 PASS
> > > test_bpf: #6 Tail call error path, index out of range jited:1 112 PASS
> > > test_bpf: test_tail_calls: Summary: 6 PASSED, 1 FAILED, [7/7 JIT'ed]
> > >
> > > The MAX_TAIL_CALL_CNT constant is referenced in the following JITs.
> > >
> > > arch/arm64/net/bpf_jit_comp.c
> > > arch/arm/net/bpf_jit_32.c
> > > arch/mips/net/ebpf_jit.c
> > > arch/powerpc/net/bpf_jit_comp32.c
> > > arch/powerpc/net/bpf_jit_comp64.c
> > > arch/riscv/net/bpf_jit_comp32.c
> > > arch/riscv/net/bpf_jit_comp64.c
> > > arch/s390/net/bpf_jit_comp.c
> > > arch/sparc/net/bpf_jit_comp_64.c
> > > arch/x86/net/bpf_jit_comp32.c
> > > arch/x86/net/bpf_jit_comp.c
> > >
> > > The x86 JITs all pass the test, even though the comments are wrong.
> > > The comments can easily be fixed of course. For JITs that have the
> > > off-by-one behaviour, an easy fix would be to change all occurrences
> > > of MAX_TAIL_CALL_CNT to MAX_TAIL_CALL_CNT - 1. We must first know
> > > which JITs are affected though.
> >
> > If you are going to fix ARM, please send a fix for the x86 comments as well.
> >
> > >
> > > The fix is easy but setting up the test is hard. It took me quite some
> > > time to get the qemu/arm setup up and running. If the same has to be
> > > done for arm64, mips64, powerpc, powerpc64, riscv32, riscv64, sparc and
> > > s390, I will need some help with this. If someone already has a
> > > working setup for any of the systems, the test can be performed on
> > > that.
> > >
> >
> > Unfortunately, I myself have only an x86-64 setup. The libbpf
> > CI/kernel-patches CI we use to run all tests runs selftests against
> > x86-64 only as well. There was a temporarily halted effort to add
> > s390x support, but it's not done yet. No one has yet volunteered to
> > set up any other platforms, and I don't know whether that's possible
> > or how hard it would be within the GitHub Actions platform we are
> > currently using.
> >
> > So in short, I understand the challenges of testing all those
> > platforms and I don't really expect any single person to do all that
> > work. I've applied your fix; please follow up with the ARM and comment
> > fixes.
>
> Thanks! I will fix the ARM JIT and the comments, then submit an
> updated patch set for the test suite with changes after Yonghong's
> review.
>
> My current test setup can easily cross-compile the kernel with busybox
> as userspace. However, getting it to run on QEMU has required some
> amount of detective work. Every platform seems to be different in
> terms of what to boot (vmlinux, zImage, bzImage), how to boot it (dtb,
> BIOS, U-Boot requirements) and QEMU vs Kconfig settings. Currently I
> can run i386, x86_64, MIPS, MIPS64 and ARM under QEMU. I can verify

At some point I tried to set up MIPS and ARM under QEMU and eventually
just gave up. So if you have it figured out, it would be nice to
document and share the process somewhere for future needs.

> and if needed fix the JIT on some of the other platforms as well, if I
> can get it to run on QEMU with a reasonable effort. However, I cannot
> build for RISC-V since I don't have a toolchain for that. I build my
> toolchains with crosstool-ng using libmusl, and the latter does not
> currently support RISC-V.
>
> As a side note, I think having a QEMU-compatible defconfig for each
> platform would make it easier to test arch-specific code. It could
> also be a first step towards fully automated arch-specific CI.
>
> Sorry for being a bit slow to respond. I am currently travelling with
> only sporadic access to e-mail.
>
> >
> > > Or perhaps there is a better way to do this? If I implement a similar
> > > test in selftest/bpf, that would trigger the CI when the patch is
> > > submitted and we will see which JITs we need to fix.
> >
> > The other nice benefit of implementing this in selftest/bpf, besides
> > continuous testing, is that you write it in C, which allows you to
> > express much more complicated logic more easily.
> >
> > >
> > > > On a related topic, please don't forget to include the target kernel
> > > > tree for your patches: [PATCH bpf] or [PATCH bpf-next].
> > >
> > > I'll add that! All patches I sent related to this are for the bpf-next tree.
> > >
> > > Johan

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2021-08-05 22:54 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-26  8:17 [RFC PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 01/14] bpf/tests: add BPF_JMP32 test cases Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 02/14] bpf/tests: add BPF_MOV tests for zero and sign extension Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 03/14] bpf/tests: fix typos in test case descriptions Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 04/14] bpf/tests: add more tests of ALU32 and ALU64 bitwise operations Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 05/14] bpf/tests: add more ALU32 tests for BPF_LSH/RSH/ARSH Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 06/14] bpf/tests: add more BPF_LSH/RSH/ARSH tests for ALU64 Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 07/14] bpf/tests: add more ALU64 BPF_MUL tests Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 08/14] bpf/tests: add tests for ALU operations implemented with function calls Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 09/14] bpf/tests: add word-order tests for load/store of double words Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 10/14] bpf/tests: add branch conversion JIT test Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 11/14] bpf/tests: add test for 32-bit context pointer argument passing Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 12/14] bpf/tests: add tests for atomic operations Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 13/14] bpf/tests: add tests for BPF_CMPXCHG Johan Almbladh
2021-07-26  8:17 ` [RFC PATCH 14/14] bpf/tests: add tail call test suite Johan Almbladh
2021-07-26 11:07   ` kernel test robot
2021-07-26 21:33   ` kernel test robot
2021-07-26 22:53 ` [RFC PATCH 00/14] bpf/tests: Extend the eBPF " Andrii Nakryiko
2021-07-28  8:27   ` Daniel Borkmann
2021-07-28 12:15     ` Johan Almbladh
2021-07-28 16:47     ` [PATCH] bpf: Fix off-by-one in tail call count limiting Johan Almbladh
2021-07-28 19:13       ` Yonghong Song
2021-07-29 21:37         ` Johan Almbladh
2021-07-29 22:29           ` Andrii Nakryiko
2021-07-29 22:48             ` Andrii Nakryiko
2021-08-01  8:37               ` Johan Almbladh
2021-08-02 20:28                 ` Andrii Nakryiko
2021-08-05 14:37                   ` Johan Almbladh
2021-08-05 22:54                     ` Andrii Nakryiko

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.