* [PATCH 00/14] bpf/tests: Extend the eBPF test suite
@ 2021-07-28 17:04 Johan Almbladh
  2021-07-28 17:04 ` [PATCH 01/14] bpf/tests: Add BPF_JMP32 test cases Johan Almbladh
                   ` (13 more replies)
  0 siblings, 14 replies; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch set extends the eBPF test suite in the test_bpf kernel module
with more extensive tests of corner cases, and adds new tests for
operations not previously covered.

An RFC patch set was previously submitted.

Link: https://lore.kernel.org/bpf/20210726081738.1833704-1-johan.almbladh@anyfinetworks.com/

Changes from RFC patch set:

* Changed the tail call count limit test to comply with the new behavior
  of the interpreter, i.e. at most MAX_TAIL_CALL_CNT tail calls are
  permitted.

* Fixed an uninitialized variable in the tail call test error path, found
  by the kernel test robot. Also fixed a warning due to a pointer-to-u32
  cast in IMM64 load instructions.

The checkpatch.pl script reports an error for the multi-line macro in
patch 14/14 ("bpf/tests: Add tail call test suite"). However, the macro
cannot be enclosed in parentheses as suggested, since it is an array
element initialization, similar to the existing BPF_LD_IMM64() helper
macro. It could be replaced, but I do think the code and the intent are
clearer this way.

Thanks,
Johan

Johan Almbladh (14):
  bpf/tests: Add BPF_JMP32 test cases
  bpf/tests: Add BPF_MOV tests for zero and sign extension
  bpf/tests: Fix typos in test case descriptions
  bpf/tests: Add more tests of ALU32 and ALU64 bitwise operations
  bpf/tests: Add more ALU32 tests for BPF_LSH/RSH/ARSH
  bpf/tests: Add more BPF_LSH/RSH/ARSH tests for ALU64
  bpf/tests: Add more ALU64 BPF_MUL tests
  bpf/tests: Add tests for ALU operations implemented with function
    calls
  bpf/tests: Add word-order tests for load/store of double words
  bpf/tests: Add branch conversion JIT test
  bpf/tests: Add test for 32-bit context pointer argument passing
  bpf/tests: Add tests for atomic operations
  bpf/tests: Add tests for BPF_CMPXCHG
  bpf/tests: Add tail call test suite

 lib/test_bpf.c | 2732 +++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 2475 insertions(+), 257 deletions(-)

-- 
2.25.1



* [PATCH 01/14] bpf/tests: Add BPF_JMP32 test cases
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 22:31   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 02/14] bpf/tests: Add BPF_MOV tests for zero and sign extension Johan Almbladh
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

An eBPF JIT may implement JMP32 operations differently from JMP,
especially on 32-bit architectures. This patch adds a series of tests
for JMP32 operations, mainly for testing JITs.
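
For reference, the semantic difference under test, as a simplified C
model (not the interpreter source; dst and src stand for the 64-bit
register contents):

  /* BPF_JMP | BPF_JEQ: compare the full 64-bit values */
  if (dst == src)
          goto target;

  /* BPF_JMP32 | BPF_JEQ: compare only the low 32 bits */
  if ((u32)dst == (u32)src)
          goto target;

On a 32-bit CPU the JMP32 form can be implemented with a single
native compare, which is why it deserves dedicated coverage.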

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 511 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 511 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index f6d5d30d01bf..bfac033db590 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -4398,6 +4398,517 @@ static struct bpf_test tests[] = {
 		{ { 0, 4134 } },
 		.fill_helper = bpf_fill_stxdw,
 	},
+	/* BPF_JMP32 | BPF_JEQ | BPF_K */
+	{
+		"JMP32_JEQ_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JEQ, R0, 321, 1),
+			BPF_JMP32_IMM(BPF_JEQ, R0, 123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JEQ_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 12345678),
+			BPF_JMP32_IMM(BPF_JEQ, R0, 12345678 & 0xffff, 1),
+			BPF_JMP32_IMM(BPF_JEQ, R0, 12345678, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 12345678 } }
+	},
+	{
+		"JMP32_JEQ_K: Negative immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JEQ, R0,  123, 1),
+			BPF_JMP32_IMM(BPF_JEQ, R0, -123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	/* BPF_JMP32 | BPF_JEQ | BPF_X */
+	{
+		"JMP32_JEQ_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 1234),
+			BPF_ALU32_IMM(BPF_MOV, R1, 4321),
+			BPF_JMP32_REG(BPF_JEQ, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 1234),
+			BPF_JMP32_REG(BPF_JEQ, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1234 } }
+	},
+	/* BPF_JMP32 | BPF_JNE | BPF_K */
+	{
+		"JMP32_JNE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JNE, R0, 123, 1),
+			BPF_JMP32_IMM(BPF_JNE, R0, 321, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JNE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 12345678),
+			BPF_JMP32_IMM(BPF_JNE, R0, 12345678, 1),
+			BPF_JMP32_IMM(BPF_JNE, R0, 12345678 & 0xffff, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 12345678 } }
+	},
+	{
+		"JMP32_JNE_K: Negative immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JNE, R0, -123, 1),
+			BPF_JMP32_IMM(BPF_JNE, R0,  123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	/* BPF_JMP32 | BPF_JNE | BPF_X */
+	{
+		"JMP32_JNE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 1234),
+			BPF_ALU32_IMM(BPF_MOV, R1, 1234),
+			BPF_JMP32_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 4321),
+			BPF_JMP32_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1234 } }
+	},
+	/* BPF_JMP32 | BPF_JSET | BPF_K */
+	{
+		"JMP32_JSET_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_JMP32_IMM(BPF_JSET, R0, 2, 1),
+			BPF_JMP32_IMM(BPF_JSET, R0, 3, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"JMP32_JSET_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x40000000),
+			BPF_JMP32_IMM(BPF_JSET, R0, 0x3fffffff, 1),
+			BPF_JMP32_IMM(BPF_JSET, R0, 0x60000000, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x40000000 } }
+	},
+	{
+		"JMP32_JSET_K: Negative immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSET, R0, -1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	/* BPF_JMP32 | BPF_JSET | BPF_X */
+	{
+		"JMP32_JSET_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 8),
+			BPF_ALU32_IMM(BPF_MOV, R1, 7),
+			BPF_JMP32_REG(BPF_JSET, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 8 | 2),
+			BPF_JMP32_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 8 } }
+	},
+	/* BPF_JMP32 | BPF_JGT | BPF_K */
+	{
+		"JMP32_JGT_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JGT, R0, 123, 1),
+			BPF_JMP32_IMM(BPF_JGT, R0, 122, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JGT_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_JMP32_IMM(BPF_JGT, R0, 0xffffffff, 1),
+			BPF_JMP32_IMM(BPF_JGT, R0, 0xfffffffd, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JGT | BPF_X */
+	{
+		"JMP32_JGT_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
+			BPF_JMP32_REG(BPF_JGT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
+			BPF_JMP32_REG(BPF_JGT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JGE | BPF_K */
+	{
+		"JMP32_JGE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JGE, R0, 124, 1),
+			BPF_JMP32_IMM(BPF_JGE, R0, 123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JGE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_JMP32_IMM(BPF_JGE, R0, 0xffffffff, 1),
+			BPF_JMP32_IMM(BPF_JGE, R0, 0xfffffffe, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JGE | BPF_X */
+	{
+		"JMP32_JGE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
+			BPF_JMP32_REG(BPF_JGE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffe),
+			BPF_JMP32_REG(BPF_JGE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JLT | BPF_K */
+	{
+		"JMP32_JLT_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JLT, R0, 123, 1),
+			BPF_JMP32_IMM(BPF_JLT, R0, 124, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JLT_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_JMP32_IMM(BPF_JLT, R0, 0xfffffffd, 1),
+			BPF_JMP32_IMM(BPF_JLT, R0, 0xffffffff, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JLT | BPF_X */
+	{
+		"JMP32_JLT_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
+			BPF_JMP32_REG(BPF_JLT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
+			BPF_JMP32_REG(BPF_JLT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JLE | BPF_K */
+	{
+		"JMP32_JLE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 123),
+			BPF_JMP32_IMM(BPF_JLE, R0, 122, 1),
+			BPF_JMP32_IMM(BPF_JLE, R0, 123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 123 } }
+	},
+	{
+		"JMP32_JLE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_JMP32_IMM(BPF_JLE, R0, 0xfffffffd, 1),
+			BPF_JMP32_IMM(BPF_JLE, R0, 0xfffffffe, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JLE | BPF_X */
+	{
+		"JMP32_JLE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
+			BPF_JMP32_REG(BPF_JLE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffe),
+			BPF_JMP32_REG(BPF_JLE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfffffffe } }
+	},
+	/* BPF_JMP32 | BPF_JSGT | BPF_K */
+	{
+		"JMP32_JSGT_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSGT, R0, -123, 1),
+			BPF_JMP32_IMM(BPF_JSGT, R0, -124, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"JMP32_JSGT_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_JMP32_IMM(BPF_JSGT, R0, -12345678, 1),
+			BPF_JMP32_IMM(BPF_JSGT, R0, -12345679, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSGT | BPF_X */
+	{
+		"JMP32_JSGT_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
+			BPF_JMP32_REG(BPF_JSGT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345679),
+			BPF_JMP32_REG(BPF_JSGT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSGE | BPF_K */
+	{
+		"JMP32_JSGE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSGE, R0, -122, 1),
+			BPF_JMP32_IMM(BPF_JSGE, R0, -123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"JMP32_JSGE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_JMP32_IMM(BPF_JSGE, R0, -12345677, 1),
+			BPF_JMP32_IMM(BPF_JSGE, R0, -12345678, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSGE | BPF_X */
+	{
+		"JMP32_JSGE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345677),
+			BPF_JMP32_REG(BPF_JSGE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
+			BPF_JMP32_REG(BPF_JSGE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSLT | BPF_K */
+	{
+		"JMP32_JSLT_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSLT, R0, -123, 1),
+			BPF_JMP32_IMM(BPF_JSLT, R0, -122, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"JMP32_JSLT_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_JMP32_IMM(BPF_JSLT, R0, -12345678, 1),
+			BPF_JMP32_IMM(BPF_JSLT, R0, -12345677, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSLT | BPF_X */
+	{
+		"JMP32_JSLT_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
+			BPF_JMP32_REG(BPF_JSLT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345677),
+			BPF_JMP32_REG(BPF_JSLT, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSLE | BPF_K */
+	{
+		"JMP32_JSLE_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_JMP32_IMM(BPF_JSLE, R0, -124, 1),
+			BPF_JMP32_IMM(BPF_JSLE, R0, -123, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"JMP32_JSLE_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_JMP32_IMM(BPF_JSLE, R0, -12345679, 1),
+			BPF_JMP32_IMM(BPF_JSLE, R0, -12345678, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
+	/* BPF_JMP32 | BPF_JSLE | BPF_X */
+	{
+		"JMP32_JSLE_X",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345679),
+			BPF_JMP32_REG(BPF_JSLE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
+			BPF_JMP32_REG(BPF_JSLE, R0, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -12345678 } }
+	},
 	/* BPF_JMP | BPF_EXIT */
 	{
 		"JMP_EXIT",
-- 
2.25.1



* [PATCH 02/14] bpf/tests: Add BPF_MOV tests for zero and sign extension
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
  2021-07-28 17:04 ` [PATCH 01/14] bpf/tests: Add BPF_JMP32 test cases Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 22:36   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 03/14] bpf/tests: Fix typos in test case descriptions Johan Almbladh
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Add tests for ALU32 and ALU64 MOV with different sizes of the immediate
value. Depending on the immediate field width of the native CPU
instructions, a JIT may generate different code for different immediate
values. Test that zero or sign extension is performed as expected.
Mainly for JIT testing.
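
The expected extension behavior, as a minimal C model (not the
interpreter source; imm is the 32-bit immediate field):

  /* BPF_ALU32 | BPF_MOV | BPF_K: result is zero-extended */
  dst = (u32)imm;         /* -123 -> 0x00000000ffffff85 */

  /* BPF_ALU64 | BPF_MOV | BPF_K: immediate is sign-extended */
  dst = (s64)(s32)imm;    /* -123 -> 0xffffffffffffff85 */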

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 84 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index bfac033db590..9e232acddce8 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -2360,6 +2360,48 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU_MOV_K: small negative",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"ALU_MOV_K: small negative zero extension",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU_MOV_K: large negative",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123456789),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123456789 } }
+	},
+	{
+		"ALU_MOV_K: large negative zero extension",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -123456789),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
 	{
 		"ALU64_MOV_K: dst = 2",
 		.u.insns_int = {
@@ -2412,6 +2454,48 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_MOV_K: small negative",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -123),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123 } }
+	},
+	{
+		"ALU64_MOV_K: small negative sign extension",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -123),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xffffffff } }
+	},
+	{
+		"ALU64_MOV_K: large negative",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -123456789),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -123456789 } }
+	},
+	{
+		"ALU64_MOV_K: large negative sign extension",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -123456789),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xffffffff } }
+	},
 	/* BPF_ALU | BPF_ADD | BPF_X */
 	{
 		"ALU_ADD_X: 1 + 2 = 3",
-- 
2.25.1



* [PATCH 03/14] bpf/tests: Fix typos in test case descriptions
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
  2021-07-28 17:04 ` [PATCH 01/14] bpf/tests: Add BPF_JMP32 test cases Johan Almbladh
  2021-07-28 17:04 ` [PATCH 02/14] bpf/tests: Add BPF_MOV tests for zero and sign extension Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 22:43   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 04/14] bpf/tests: Add more tests of ALU32 and ALU64 bitwise operations Johan Almbladh
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch corrects the test description in a number of cases where
the description differed from what was actually tested and expected.

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 9e232acddce8..9695d13812df 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -3537,7 +3537,7 @@ static struct bpf_test tests[] = {
 		{ { 0, 0xffffffff } },
 	},
 	{
-		"ALU64_AND_K: 0x0000ffffffff0000 & 0x0 = 0x0000ffff00000000",
+		"ALU64_AND_K: 0x0000ffffffff0000 & 0x0 = 0x0000000000000000",
 		.u.insns_int = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000000000000000LL),
@@ -3553,7 +3553,7 @@ static struct bpf_test tests[] = {
 		{ { 0, 0x1 } },
 	},
 	{
-		"ALU64_AND_K: 0x0000ffffffff0000 & -1 = 0x0000ffffffffffff",
+		"ALU64_AND_K: 0x0000ffffffff0000 & -1 = 0x0000ffffffff0000",
 		.u.insns_int = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000ffffffff0000LL),
@@ -3679,7 +3679,7 @@ static struct bpf_test tests[] = {
 		{ { 0, 0xffffffff } },
 	},
 	{
-		"ALU64_OR_K: 0x0000ffffffff0000 | 0x0 = 0x0000ffff00000000",
+		"ALU64_OR_K: 0x0000ffffffff0000 | 0x0 = 0x0000ffffffff0000",
 		.u.insns_int = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000ffffffff0000LL),
@@ -3810,7 +3810,7 @@ static struct bpf_test tests[] = {
 		{ { 0, 3 } },
 	},
 	{
-		"ALU64_XOR_K: 1 & 0xffffffff = 0xfffffffe",
+		"ALU64_XOR_K: 1 ^ 0xffffffff = 0xfffffffe",
 		.u.insns_int = {
 			BPF_LD_IMM64(R0, 1),
 			BPF_ALU64_IMM(BPF_XOR, R0, 0xffffffff),
-- 
2.25.1



* [PATCH 04/14] bpf/tests: Add more tests of ALU32 and ALU64 bitwise operations
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (2 preceding siblings ...)
  2021-07-28 17:04 ` [PATCH 03/14] bpf/tests: Fix typos in test case descriptions Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 22:53   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 05/14] bpf/tests: Add more ALU32 tests for BPF_LSH/RSH/ARSH Johan Almbladh
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch adds tests of BPF_AND, BPF_OR and BPF_XOR with different
magnitudes of the immediate value, mainly checking 32-bit JIT sub-word
handling and zero/sign extension.
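
The expected semantics, sketched in C (a model of the instruction set,
not the interpreter source; shown for BPF_AND):

  /* ALU64 with immediate: imm is sign-extended to 64 bits first,
   * so bit 31 of imm decides the high word of the result */
  dst = dst & (u64)(s64)(s32)imm;

  /* ALU32 with immediate: only the low word is computed, and the
   * result is zero-extended to 64 bits */
  dst = (u32)dst & (u32)imm;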

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 210 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 210 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 9695d13812df..67e7de776c12 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -3514,6 +3514,44 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffffffff } },
 	},
+	{
+		"ALU_AND_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01020304),
+			BPF_ALU32_IMM(BPF_AND, R0, 15),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 4 } }
+	},
+	{
+		"ALU_AND_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xf1f2f3f4),
+			BPF_ALU32_IMM(BPF_AND, R0, 0xafbfcfdf),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xa1b2c3d4 } }
+	},
+	{
+		"ALU_AND_K: Zero extension",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x0000000080a0c0e0LL),
+			BPF_ALU32_IMM(BPF_AND, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	{
 		"ALU64_AND_K: 3 & 2 = 2",
 		.u.insns_int = {
@@ -3584,6 +3622,38 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_AND_K: Sign extension 1",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x00000000090b0d0fLL),
+			BPF_ALU64_IMM(BPF_AND, R0, 0x0f0f0f0f),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"ALU64_AND_K: Sign extension 2",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x0123456780a0c0e0LL),
+			BPF_ALU64_IMM(BPF_AND, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	/* BPF_ALU | BPF_OR | BPF_X */
 	{
 		"ALU_OR_X: 1 | 2 = 3",
@@ -3656,6 +3726,44 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffffffff } },
 	},
+	{
+		"ALU_OR_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01020304),
+			BPF_ALU32_IMM(BPF_OR, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01020305 } }
+	},
+	{
+		"ALU_OR_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01020304),
+			BPF_ALU32_IMM(BPF_OR, R0, 0xa0b0c0d0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xa1b2c3d4 } }
+	},
+	{
+		"ALU_OR_K: Zero extension",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x00000000f9fbfdffLL),
+			BPF_ALU32_IMM(BPF_OR, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	{
 		"ALU64_OR_K: 1 | 2 = 3",
 		.u.insns_int = {
@@ -3726,6 +3834,38 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_OR_K: Sign extension 1",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x012345678fafcfefLL),
+			BPF_ALU64_IMM(BPF_OR, R0, 0x0f0f0f0f),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"ALU64_OR_K: Sign extension 2",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0xfffffffff9fbfdffLL),
+			BPF_ALU64_IMM(BPF_OR, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	/* BPF_ALU | BPF_XOR | BPF_X */
 	{
 		"ALU_XOR_X: 5 ^ 6 = 3",
@@ -3798,6 +3938,44 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xfffffffe } },
 	},
+	{
+		"ALU_XOR_K: Small immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01020304),
+			BPF_ALU32_IMM(BPF_XOR, R0, 15),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x0102030b } }
+	},
+	{
+		"ALU_XOR_K: Large immediate",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0xf1f2f3f4),
+			BPF_ALU32_IMM(BPF_XOR, R0, 0xafbfcfdf),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x5e4d3c2b } }
+	},
+	{
+		"ALU_XOR_K: Zero extension",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x00000000795b3d1fLL),
+			BPF_ALU32_IMM(BPF_XOR, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	{
 		"ALU64_XOR_K: 5 ^ 6 = 3",
 		.u.insns_int = {
@@ -3868,6 +4046,38 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_XOR_K: Sign extension 1",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0x0123456786a4c2e0LL),
+			BPF_ALU64_IMM(BPF_XOR, R0, 0x0f0f0f0f),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"ALU64_XOR_K: Sign extension 2",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_LD_IMM64(R1, 0xfedcba98795b3d1fLL),
+			BPF_ALU64_IMM(BPF_XOR, R0, 0xf0f0f0f0),
+			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_MOV32_IMM(R0, 2),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
 	/* BPF_ALU | BPF_LSH | BPF_X */
 	{
 		"ALU_LSH_X: 1 << 1 = 2",
-- 
2.25.1



* [PATCH 05/14] bpf/tests: Add more ALU32 tests for BPF_LSH/RSH/ARSH
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (3 preceding siblings ...)
  2021-07-28 17:04 ` [PATCH 04/14] bpf/tests: Add more tests of ALU32 and ALU64 bitwise operations Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 22:57   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 06/14] bpf/tests: Add more BPF_LSH/RSH/ARSH tests for ALU64 Johan Almbladh
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch adds more tests of the ALU32 shift operations BPF_LSH and
BPF_RSH, including the special case of a zero immediate. It also adds
the corresponding BPF_ARSH tests, which were missing for ALU32.
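
The zero immediate is called out because some native shift encodings
cannot express a shift by zero, so a JIT may need to special-case it.
The expected ALU32 semantics, as a C model (not the interpreter
source):

  /* k = 0 must leave the low word unchanged and still
   * zero-extend the 32-bit result into the 64-bit register */
  dst = (u32)((u32)dst << k);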

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 102 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 67e7de776c12..ef75dbf53ec2 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -4103,6 +4103,18 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x80000000 } },
 	},
+	{
+		"ALU_LSH_X: 0x12345678 << 12 = 0x45678000",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU32_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x45678000 } }
+	},
 	{
 		"ALU64_LSH_X: 1 << 1 = 2",
 		.u.insns_int = {
@@ -4150,6 +4162,28 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x80000000 } },
 	},
+	{
+		"ALU_LSH_K: 0x12345678 << 12 = 0x45678000",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_LSH, R0, 12),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x45678000 } }
+	},
+	{
+		"ALU_LSH_K: 0x12345678 << 0 = 0x12345678",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_LSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x12345678 } }
+	},
 	{
 		"ALU64_LSH_K: 1 << 1 = 2",
 		.u.insns_int = {
@@ -4197,6 +4231,18 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{
+		"ALU_RSH_X: 0x12345678 >> 20 = 0x123",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_MOV, R1, 20),
+			BPF_ALU32_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x123 } }
+	},
 	{
 		"ALU64_RSH_X: 2 >> 1 = 1",
 		.u.insns_int = {
@@ -4244,6 +4290,28 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{
+		"ALU_RSH_K: 0x12345678 >> 20 = 0x123",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_RSH, R0, 20),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x123 } }
+	},
+	{
+		"ALU_RSH_K: 0x12345678 >> 0 = 0x12345678",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x12345678),
+			BPF_ALU32_IMM(BPF_RSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x12345678 } }
+	},
 	{
 		"ALU64_RSH_K: 2 >> 1 = 1",
 		.u.insns_int = {
@@ -4267,6 +4335,18 @@ static struct bpf_test tests[] = {
 		{ { 0, 1 } },
 	},
 	/* BPF_ALU | BPF_ARSH | BPF_X */
+	{
+		"ALU32_ARSH_X: -1234 >> 7 = -10",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -1234),
+			BPF_ALU32_IMM(BPF_MOV, R1, 7),
+			BPF_ALU32_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -10 } }
+	},
 	{
 		"ALU_ARSH_X: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
 		.u.insns_int = {
@@ -4280,6 +4360,28 @@ static struct bpf_test tests[] = {
 		{ { 0, 0xffff00ff } },
 	},
 	/* BPF_ALU | BPF_ARSH | BPF_K */
+	{
+		"ALU32_ARSH_K: -1234 >> 7 = -10",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -1234),
+			BPF_ALU32_IMM(BPF_ARSH, R0, 7),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -10 } }
+	},
+	{
+		"ALU32_ARSH_K: -1234 >> 0 = -1234",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, -1234),
+			BPF_ALU32_IMM(BPF_ARSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1234 } }
+	},
 	{
 		"ALU_ARSH_K: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
 		.u.insns_int = {
-- 
2.25.1



* [PATCH 06/14] bpf/tests: Add more BPF_LSH/RSH/ARSH tests for ALU64
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (4 preceding siblings ...)
  2021-07-28 17:04 ` [PATCH 05/14] bpf/tests: Add more ALU32 tests for BPF_LSH/RSH/ARSH Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 23:30   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 07/14] bpf/tests: Add more ALU64 BPF_MUL tests Johan Almbladh
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch adds a number of tests for BPF_LSH, BPF_RSH and BPF_ARSH
ALU64 operations with values that may trigger different JIT code paths,
mainly testing 32-bit JITs that implement ALU64 operations with two
32-bit CPU registers per operand.
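
For reference, a minimal sketch of how such a JIT might decompose
ALU64 BPF_LSH when the operand lives in a {hi, lo} pair of 32-bit
registers (an assumed model, not taken from any particular JIT):

  if (k == 0) {
          /* nothing to do */
  } else if (k < 32) {
          hi = (hi << k) | (lo >> (32 - k));
          lo <<= k;
  } else if (k == 32) {
          hi = lo;
          lo = 0;
  } else {                /* 32 < k < 64 */
          hi = lo << (k - 32);
          lo = 0;
  }

Each of these code paths gets a dedicated test case below, checking
the low and high result words separately.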

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 544 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 542 insertions(+), 2 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ef75dbf53ec2..b930fa35b9ef 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -4139,6 +4139,106 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x80000000 } },
 	},
+	{
+		"ALU64_LSH_X: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xbcdef000 } }
+	},
+	{
+		"ALU64_LSH_X: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x3456789a } }
+	},
+	{
+		"ALU64_LSH_X: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_LSH_X: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x9abcdef0 } }
+	},
+	{
+		"ALU64_LSH_X: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_LSH_X: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_LSH_X: Zero shift, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_LSH_X: Zero shift, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_LSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01234567 } }
+	},
 	/* BPF_ALU | BPF_LSH | BPF_K */
 	{
 		"ALU_LSH_K: 1 << 1 = 2",
@@ -4206,6 +4306,86 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x80000000 } },
 	},
+	{
+		"ALU64_LSH_K: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 12),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xbcdef000 } }
+	},
+	{
+		"ALU64_LSH_K: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 12),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x3456789a } }
+	},
+	{
+		"ALU64_LSH_K: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 36),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_LSH_K: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 36),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x9abcdef0 } }
+	},
+	{
+		"ALU64_LSH_K: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_LSH_K: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 32),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_LSH_K: Zero shift",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_LSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
 	/* BPF_ALU | BPF_RSH | BPF_X */
 	{
 		"ALU_RSH_X: 2 >> 1 = 1",
@@ -4267,6 +4447,106 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{
+		"ALU64_RSH_X: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x56789abc } }
+	},
+	{
+		"ALU64_RSH_X: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x00081234 } }
+	},
+	{
+		"ALU64_RSH_X: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x08123456 } }
+	},
+	{
+		"ALU64_RSH_X: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_RSH_X: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
+	{
+		"ALU64_RSH_X: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_RSH_X: Zero shift, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_RSH_X: Zero shift, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_RSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
 	/* BPF_ALU | BPF_RSH | BPF_K */
 	{
 		"ALU_RSH_K: 2 >> 1 = 1",
@@ -4334,6 +4614,86 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{
+		"ALU64_RSH_K: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 12),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x56789abc } }
+	},
+	{
+		"ALU64_RSH_K: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 12),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x00081234 } }
+	},
+	{
+		"ALU64_RSH_K: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 36),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x08123456 } }
+	},
+	{
+		"ALU64_RSH_K: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 36),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_RSH_K: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
+	{
+		"ALU64_RSH_K: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } }
+	},
+	{
+		"ALU64_RSH_K: Zero shift",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_RSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
 	/* BPF_ALU | BPF_ARSH | BPF_X */
 	{
 		"ALU32_ARSH_X: -1234 >> 7 = -10",
@@ -4348,7 +4708,7 @@ static struct bpf_test tests[] = {
 		{ { 0, -10 } }
 	},
 	{
-		"ALU_ARSH_X: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
+		"ALU64_ARSH_X: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
 		.u.insns_int = {
 			BPF_LD_IMM64(R0, 0xff00ff0000000000LL),
 			BPF_ALU32_IMM(BPF_MOV, R1, 40),
@@ -4359,6 +4719,106 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffff00ff } },
 	},
+	{
+		"ALU64_ARSH_X: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x56789abc } }
+	},
+	{
+		"ALU64_ARSH_X: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 12),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfff81234 } }
+	},
+	{
+		"ALU64_ARSH_X: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xf8123456 } }
+	},
+	{
+		"ALU64_ARSH_X: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 36),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1 } }
+	},
+	{
+		"ALU64_ARSH_X: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
+	{
+		"ALU64_ARSH_X: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 32),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1 } }
+	},
+	{
+		"ALU64_ARSH_X: Zero shift, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
+	{
+		"ALU64_ARSH_X: Zero shift, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_REG(BPF_ARSH, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
 	/* BPF_ALU | BPF_ARSH | BPF_K */
 	{
 		"ALU32_ARSH_K: -1234 >> 7 = -10",
@@ -4383,7 +4843,7 @@ static struct bpf_test tests[] = {
 		{ { 0, -1234 } }
 	},
 	{
-		"ALU_ARSH_K: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
+		"ALU64_ARSH_K: 0xff00ff0000000000 >> 40 = 0xffffffffffff00ff",
 		.u.insns_int = {
 			BPF_LD_IMM64(R0, 0xff00ff0000000000LL),
 			BPF_ALU64_IMM(BPF_ARSH, R0, 40),
@@ -4393,6 +4853,86 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffff00ff } },
 	},
+	{
+		"ALU64_ARSH_K: Shift < 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 12),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x56789abc } }
+	},
+	{
+		"ALU64_ARSH_K: Shift < 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 12),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfff81234 } }
+	},
+	{
+		"ALU64_ARSH_K: Shift > 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 36),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xf8123456 } }
+	},
+	{
+		"ALU64_ARSH_K: Shift > 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xf123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 36),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1 } }
+	},
+	{
+		"ALU64_ARSH_K: Shift == 32, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x81234567 } }
+	},
+	{
+		"ALU64_ARSH_K: Shift == 32, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 32),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, -1 } }
+	},
+	{
+		"ALU64_ARSH_K: Zero shift",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x8123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_ARSH, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } }
+	},
 	/* BPF_ALU | BPF_NEG */
 	{
 		"ALU_NEG: -(3) = -3",
-- 
2.25.1



* [PATCH 07/14] bpf/tests: Add more ALU64 BPF_MUL tests
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (5 preceding siblings ...)
  2021-07-28 17:04 ` [PATCH 06/14] bpf/tests: Add more BPF_LSH/RSH/ARSH tests for ALU64 Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 23:32   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 08/14] bpf/tests: Add tests for ALU operations implemented with function calls Johan Almbladh
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

This patch adds BPF_MUL tests for 64x32 and 64x64 multiply, mainly
testing 32-bit JITs that implement ALU64 operations with two 32-bit
CPU registers per operand.
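
A 64x64 multiply on such a JIT is typically built from 32x32->64
partial products, roughly as sketched below in C (an assumed
decomposition, not taken from any particular JIT):

  static u64 mul64(u64 a, u64 b)
  {
          u32 a_lo = (u32)a, a_hi = (u32)(a >> 32);
          u32 b_lo = (u32)b, b_hi = (u32)(b >> 32);
          u64 lo = (u64)a_lo * b_lo;
          u32 hi = (u32)(lo >> 32) + a_hi * b_lo + a_lo * b_hi;

          return ((u64)hi << 32) | (u32)lo;
  }

The tests check the low and high words of the result separately, so
a faulty cross-product term cannot go unnoticed.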

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index b930fa35b9ef..eb61088a674f 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -3051,6 +3051,31 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 2147483647 } },
 	},
+	{
+		"ALU64_MUL_X: 64x64 multiply, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0fedcba987654321LL),
+			BPF_LD_IMM64(R1, 0x123456789abcdef0LL),
+			BPF_ALU64_REG(BPF_MUL, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xe5618cf0 } }
+	},
+	{
+		"ALU64_MUL_X: 64x64 multiply, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0fedcba987654321LL),
+			BPF_LD_IMM64(R1, 0x123456789abcdef0LL),
+			BPF_ALU64_REG(BPF_MUL, R0, R1),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x2236d88f } }
+	},
 	/* BPF_ALU | BPF_MUL | BPF_K */
 	{
 		"ALU_MUL_K: 2 * 3 = 6",
@@ -3161,6 +3186,29 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
+	{
+		"ALU64_MUL_K: 64x32 multiply, low word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_MUL, R0, 0x12345678),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xe242d208 } }
+	},
+	{
+		"ALU64_MUL_K: 64x32 multiply, high word",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ALU64_IMM(BPF_MUL, R0, 0x12345678),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xc28f5c28 } }
+	},
 	/* BPF_ALU | BPF_DIV | BPF_X */
 	{
 		"ALU_DIV_X: 6 / 2 = 3",
-- 
2.25.1



* [PATCH 08/14] bpf/tests: Add tests for ALU operations implemented with function calls
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (6 preceding siblings ...)
  2021-07-28 17:04 ` [PATCH 07/14] bpf/tests: Add more ALU64 BPF_MUL tests Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 23:52   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 09/14] bpf/tests: Add word-order tests for load/store of double words Johan Almbladh
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

32-bit JITs may implement complex ALU64 instructions using function calls.
The new tests check aspects related to this, such as register clobbering
and register argument re-ordering.
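
As a model, such a JIT lowers the instruction to a helper call,
roughly as below (jit_div64 is a hypothetical helper, not an actual
kernel function):

  /* BPF_ALU64 | BPF_DIV | BPF_X becomes dst = jit_div64(dst, src) */
  static u64 jit_div64(u64 dst, u64 src)
  {
          return dst / src;       /* zero-divisor handling elided */
  }

The call clobbers caller-saved CPU registers, so BPF registers kept
there must be saved and restored around it, and the operands must be
moved into the ABI argument registers without overwriting each other
when the register pairs overlap. The tests exercise both hazards.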

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 138 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 138 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index eb61088a674f..1115e39630ce 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -1916,6 +1916,144 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, -1 } }
 	},
+	{
+		/*
+		 * Register (non-)clobbering test, in the case where a 32-bit
+		 * JIT implements complex ALU64 operations via function calls.
+		 */
+		"INT: Register clobbering, R1 updated",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_ALU32_IMM(BPF_MOV, R1, 123456789),
+			BPF_ALU32_IMM(BPF_MOV, R2, 2),
+			BPF_ALU32_IMM(BPF_MOV, R3, 3),
+			BPF_ALU32_IMM(BPF_MOV, R4, 4),
+			BPF_ALU32_IMM(BPF_MOV, R5, 5),
+			BPF_ALU32_IMM(BPF_MOV, R6, 6),
+			BPF_ALU32_IMM(BPF_MOV, R7, 7),
+			BPF_ALU32_IMM(BPF_MOV, R8, 8),
+			BPF_ALU32_IMM(BPF_MOV, R9, 9),
+			BPF_ALU64_IMM(BPF_DIV, R1, 123456789),
+			BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
+			BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
+			BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
+			BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
+			BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
+			BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
+			BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
+			BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
+			BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
+			BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"INT: Register clobbering, R2 updated",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_ALU32_IMM(BPF_MOV, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789),
+			BPF_ALU32_IMM(BPF_MOV, R3, 3),
+			BPF_ALU32_IMM(BPF_MOV, R4, 4),
+			BPF_ALU32_IMM(BPF_MOV, R5, 5),
+			BPF_ALU32_IMM(BPF_MOV, R6, 6),
+			BPF_ALU32_IMM(BPF_MOV, R7, 7),
+			BPF_ALU32_IMM(BPF_MOV, R8, 8),
+			BPF_ALU32_IMM(BPF_MOV, R9, 9),
+			BPF_ALU64_IMM(BPF_DIV, R2, 123456789),
+			BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
+			BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
+			BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
+			BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
+			BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
+			BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
+			BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
+			BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
+			BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
+			BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		/*
+		 * Test 32-bit JITs that implement complex ALU64 operations as
+		 * function calls R0 = f(R1, R2), and must re-arrange operands.
+		 */
+#define NUMER 0xfedcba9876543210ULL
+#define DENOM 0x0123456789abcdefULL
+		"ALU64_DIV X: Operand register permutations",
+		.u.insns_int = {
+			/* R0 / R2 */
+			BPF_LD_IMM64(R0, NUMER),
+			BPF_LD_IMM64(R2, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R0, R2),
+			BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R1 / R0 */
+			BPF_LD_IMM64(R1, NUMER),
+			BPF_LD_IMM64(R0, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R1, R0),
+			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R0 / R1 */
+			BPF_LD_IMM64(R0, NUMER),
+			BPF_LD_IMM64(R1, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R0, R1),
+			BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R2 / R0 */
+			BPF_LD_IMM64(R2, NUMER),
+			BPF_LD_IMM64(R0, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R2, R0),
+			BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R2 / R1 */
+			BPF_LD_IMM64(R2, NUMER),
+			BPF_LD_IMM64(R1, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R2, R1),
+			BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R1 / R2 */
+			BPF_LD_IMM64(R1, NUMER),
+			BPF_LD_IMM64(R2, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R1, R2),
+			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			BPF_LD_IMM64(R0, 1),
+			/* R1 / R1 */
+			BPF_LD_IMM64(R1, NUMER),
+			BPF_ALU64_REG(BPF_DIV, R1, R1),
+			BPF_JMP_IMM(BPF_JEQ, R1, 1, 1),
+			BPF_EXIT_INSN(),
+			/* R2 / R2 */
+			BPF_LD_IMM64(R2, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R2, R2),
+			BPF_JMP_IMM(BPF_JEQ, R2, 1, 1),
+			BPF_EXIT_INSN(),
+			/* R3 / R4 */
+			BPF_LD_IMM64(R3, NUMER),
+			BPF_LD_IMM64(R4, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R3, R4),
+			BPF_JMP_IMM(BPF_JEQ, R3, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* Successful return */
+			BPF_LD_IMM64(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } },
+#undef NUMER
+#undef DENOM
+	},
 	{
 		"check: missing ret",
 		.u.insns = {
-- 
2.25.1



* [PATCH 09/14] bpf/tests: Add word-order tests for load/store of double words
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (7 preceding siblings ...)
  2021-07-28 17:04 ` [PATCH 08/14] bpf/tests: Add tests for ALU operations implemented with function calls Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 23:54   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 10/14] bpf/tests: Add branch conversion JIT test Johan Almbladh
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

A double word (64-bit) load/store may be implemented as two successive
32-bit operations, one for each word. Check that the order of those
operations is consistent with the machine endianness.
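
The expected layout, as a C sketch (a model, not the interpreter
source):

  /* Store a 64-bit value as two 32-bit words; which word lands
   * at the lower address must match the CPU endianness */
  static void store_dw(u32 *p, u64 v)
  {
  #ifdef __BIG_ENDIAN
          p[0] = (u32)(v >> 32);  /* high word first */
          p[1] = (u32)v;
  #else
          p[0] = (u32)v;          /* low word first */
          p[1] = (u32)(v >> 32);
  #endif
  }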

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 1115e39630ce..8b94902702ed 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -5417,6 +5417,42 @@ static struct bpf_test tests[] = {
 		{ { 0, 0xffffffff } },
 		.stack_depth = 40,
 	},
+	{
+		"STX_MEM_DW: Store double word: first word in memory",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0),
+			BPF_LD_IMM64(R1, 0x0123456789abcdefLL),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_LDX_MEM(BPF_W, R0, R10, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+#ifdef __BIG_ENDIAN
+		{ { 0, 0x01234567 } },
+#else
+		{ { 0, 0x89abcdef } },
+#endif
+		.stack_depth = 40,
+	},
+	{
+		"STX_MEM_DW: Store double word: second word in memory",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0),
+			BPF_LD_IMM64(R1, 0x0123456789abcdefLL),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_LDX_MEM(BPF_W, R0, R10, -36),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+#ifdef __BIG_ENDIAN
+		{ { 0, 0x89abcdef } },
+#else
+		{ { 0, 0x01234567 } },
+#endif
+		.stack_depth = 40,
+	},
 	/* BPF_STX | BPF_ATOMIC | BPF_W/DW */
 	{
 		"STX_XADD_W: Test: 0x12 + 0x10 = 0x22",
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 10/14] bpf/tests: Add branch conversion JIT test
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (8 preceding siblings ...)
  2021-07-28 17:04 ` [PATCH 09/14] bpf/tests: Add word-order tests for load/store of double words Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-28 23:58   ` Yonghong Song
  2021-07-29  0:55   ` Yonghong Song
  2021-07-28 17:04 ` [PATCH 11/14] bpf/tests: Add test for 32-bit context pointer argument passing Johan Almbladh
                   ` (3 subsequent siblings)
  13 siblings, 2 replies; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Some JITs may need to convert a conditional jump instruction to a
short PC-relative branch and a long unconditional jump, if the
PC-relative offset exceeds the offset field width of the CPU
instruction. This test triggers such branch conversion on the 32-bit
MIPS JIT.
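
A minimal sketch of the decision involved, assuming a MIPS-like 18-bit
signed branch range; the helper name and constants are illustrative,
not taken from the actual JIT:

	#include <stdbool.h>

	/* Can the target be reached with a short PC-relative branch? */
	static bool fits_short_branch(long pc, long target)
	{
		long off = target - (pc + 4);	/* relative to PC + 4 */

		return off >= -(1L << 17) && off < (1L << 17);
	}

	/*
	 * If it does not fit, the JIT can invert the condition and
	 * branch around a long unconditional jump (pseudo-assembly):
	 *
	 *	beq	r1, r2, far	->	bne	r1, r2, 1f
	 *					j	far
	 *				1:
	 */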

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 8b94902702ed..55914b6236aa 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -461,6 +461,36 @@ static int bpf_fill_stxdw(struct bpf_test *self)
 	return __bpf_fill_stxdw(self, BPF_DW);
 }
 
+static int bpf_fill_long_jmp(struct bpf_test *self)
+{
+	unsigned int len = BPF_MAXINSNS;
+	struct bpf_insn *insn;
+	int i;
+
+	insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
+	if (!insn)
+		return -ENOMEM;
+
+	insn[0] = BPF_ALU64_IMM(BPF_MOV, R0, 1);
+	insn[1] = BPF_JMP_IMM(BPF_JEQ, R0, 1, len - 2 - 1);
+
+	/*
+	 * Fill with a complex 64-bit operation that expands to a lot of
+	 * instructions on 32-bit JITs. The large jump offset can then
+	 * overflow the conditional branch field size, triggering a branch
+	 * conversion mechanism in some JITs.
+	 */
+	for (i = 2; i < len - 1; i++)
+		insn[i] = BPF_ALU64_IMM(BPF_MUL, R0, (i << 16) + i);
+
+	insn[len - 1] = BPF_EXIT_INSN();
+
+	self->u.ptr.insns = insn;
+	self->u.ptr.len = len;
+
+	return 0;
+}
+
 static struct bpf_test tests[] = {
 	{
 		"TAX",
@@ -6892,6 +6922,14 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
+	{	/* Mainly checking JIT here. */
+		"BPF_MAXINSNS: Very long conditional jump",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_long_jmp,
+	},
 	{
 		"JMP_JA: Jump, gap, jump, ...",
 		{ },
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 11/14] bpf/tests: Add test for 32-bit context pointer argument passing
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (9 preceding siblings ...)
  2021-07-28 17:04 ` [PATCH 10/14] bpf/tests: Add branch conversion JIT test Johan Almbladh
@ 2021-07-28 17:04 ` Johan Almbladh
  2021-07-29  0:09   ` Yonghong Song
  2021-07-28 17:05 ` [PATCH 12/14] bpf/tests: Add tests for atomic operations Johan Almbladh
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:04 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

On a 32-bit architecture, the context pointer should occupy the low
half of R1, and the other half should be zero.
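
A plain C sketch of what a 32-bit JIT is expected to do when seeding
the 64-bit R1 register pair from the native context pointer; this is
an illustration, not the kernel implementation:

	#include <stdint.h>

	struct reg64 {
		uint32_t lo;
		uint32_t hi;
	};

	static struct reg64 seed_ctx_reg(void *ctx)
	{
		struct reg64 r1;

		r1.lo = (uint32_t)(uintptr_t)ctx; /* pointer in low word */
		r1.hi = 0;			  /* high word must be 0 */
		return r1;
	}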

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 55914b6236aa..314af6eaeb92 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -2084,6 +2084,22 @@ static struct bpf_test tests[] = {
 #undef NUMER
 #undef DENOM
 	},
+#ifdef CONFIG_32BIT
+	{
+		"INT: 32-bit context pointer word order and zero-extension",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_JMP32_IMM(BPF_JEQ, R1, 0, 3),
+			BPF_ALU64_IMM(BPF_RSH, R1, 32),
+			BPF_JMP32_IMM(BPF_JNE, R1, 0, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+#endif
 	{
 		"check: missing ret",
 		.u.insns = {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 12/14] bpf/tests: Add tests for atomic operations
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (10 preceding siblings ...)
  2021-07-28 17:04 ` [PATCH 11/14] bpf/tests: Add test for 32-bit context pointer argument passing Johan Almbladh
@ 2021-07-28 17:05 ` Johan Almbladh
  2021-07-29  0:36   ` Yonghong Song
  2021-07-28 17:05 ` [PATCH 13/14] bpf/tests: Add tests for BPF_CMPXCHG Johan Almbladh
  2021-07-28 17:05 ` [PATCH 14/14] bpf/tests: Add tail call test suite Johan Almbladh
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:05 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Add tests for each atomic arithmetic operation and for BPF_XCHG,
derived from the old BPF_XADD tests. The tests include BPF_W/DW and
BPF_FETCH variants.
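
The semantics being checked can be sketched in plain C, with atomicity
omitted; the function and variable names are illustrative:

	#include <stdint.h>

	/* BPF_ATOMIC | BPF_W, BPF_ADD [| BPF_FETCH] on a memory word */
	static uint32_t atomic_add_sketch(uint32_t *mem, uint32_t *src,
					  int fetch)
	{
		uint32_t old = *mem;

		*mem = old + *src;	/* the arithmetic update */
		if (fetch)
			*src = old;	/* BPF_FETCH returns old value */
		return *mem;
	}

With old = 0x12 and *src = 0xab, memory ends up as 0xbd for both the
plain and the fetching variant; with BPF_FETCH the source register
additionally receives the old value 0x12, which is what the TEST4
template verifies.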

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 252 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 166 insertions(+), 86 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 314af6eaeb92..ac50cb023324 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -5500,49 +5500,6 @@ static struct bpf_test tests[] = {
 		.stack_depth = 40,
 	},
 	/* BPF_STX | BPF_ATOMIC | BPF_W/DW */
-	{
-		"STX_XADD_W: Test: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_W, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_W, BPF_ADD, R10, R0, -40),
-			BPF_LDX_MEM(BPF_W, R0, R10, -40),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0x22 } },
-		.stack_depth = 40,
-	},
-	{
-		"STX_XADD_W: Test side-effects, r10: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU64_REG(BPF_MOV, R1, R10),
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_W, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_W, BPF_ADD, R10, R0, -40),
-			BPF_ALU64_REG(BPF_MOV, R0, R10),
-			BPF_ALU64_REG(BPF_SUB, R0, R1),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0 } },
-		.stack_depth = 40,
-	},
-	{
-		"STX_XADD_W: Test side-effects, r0: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_W, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_W, BPF_ADD, R10, R0, -40),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0x12 } },
-		.stack_depth = 40,
-	},
 	{
 		"STX_XADD_W: X + 1 + 1 + 1 + ...",
 		{ },
@@ -5551,49 +5508,6 @@ static struct bpf_test tests[] = {
 		{ { 0, 4134 } },
 		.fill_helper = bpf_fill_stxw,
 	},
-	{
-		"STX_XADD_DW: Test: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_DW, BPF_ADD, R10, R0, -40),
-			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0x22 } },
-		.stack_depth = 40,
-	},
-	{
-		"STX_XADD_DW: Test side-effects, r10: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU64_REG(BPF_MOV, R1, R10),
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_DW, BPF_ADD, R10, R0, -40),
-			BPF_ALU64_REG(BPF_MOV, R0, R10),
-			BPF_ALU64_REG(BPF_SUB, R0, R1),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0 } },
-		.stack_depth = 40,
-	},
-	{
-		"STX_XADD_DW: Test side-effects, r0: 0x12 + 0x10 = 0x22",
-		.u.insns_int = {
-			BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
-			BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
-			BPF_ATOMIC_OP(BPF_DW, BPF_ADD, R10, R0, -40),
-			BPF_EXIT_INSN(),
-		},
-		INTERNAL,
-		{ },
-		{ { 0, 0x12 } },
-		.stack_depth = 40,
-	},
 	{
 		"STX_XADD_DW: X + 1 + 1 + 1 + ...",
 		{ },
@@ -5602,6 +5516,172 @@ static struct bpf_test tests[] = {
 		{ { 0, 4134 } },
 		.fill_helper = bpf_fill_stxdw,
 	},
+	/*
+	 * Exhaustive tests of atomic operation variants.
+	 * Individual tests are expanded from template macros for all
+	 * combinations of ALU operation, word size and fetching.
+	 */
+#define BPF_ATOMIC_OP_TEST1(width, op, logic, old, update, result)	\
+{									\
+	"BPF_ATOMIC | " #width ", " #op ": Test: "			\
+		#old " " #logic " " #update " = " #result,		\
+	.u.insns_int = {						\
+		BPF_ALU32_IMM(BPF_MOV, R5, update),			\
+		BPF_ST_MEM(width, R10, -40, old),			\
+		BPF_ATOMIC_OP(width, op, R10, R5, -40),			\
+		BPF_LDX_MEM(width, R0, R10, -40),			\
+		BPF_EXIT_INSN(),					\
+	},								\
+	INTERNAL,							\
+	{ },								\
+	{ { 0, result } },						\
+	.stack_depth = 40,						\
+}
+#define BPF_ATOMIC_OP_TEST2(width, op, logic, old, update, result)	\
+{									\
+	"BPF_ATOMIC | " #width ", " #op ": Test side effects, r10: "	\
+		#old " " #logic " " #update " = " #result,		\
+	.u.insns_int = {						\
+		BPF_ALU64_REG(BPF_MOV, R1, R10),			\
+		BPF_ALU32_IMM(BPF_MOV, R0, update),			\
+		BPF_ST_MEM(BPF_W, R10, -40, old),			\
+		BPF_ATOMIC_OP(width, op, R10, R0, -40),			\
+		BPF_ALU64_REG(BPF_MOV, R0, R10),			\
+		BPF_ALU64_REG(BPF_SUB, R0, R1),				\
+		BPF_EXIT_INSN(),					\
+	},								\
+	INTERNAL,							\
+	{ },								\
+	{ { 0, 0 } },							\
+	.stack_depth = 40,						\
+}
+#define BPF_ATOMIC_OP_TEST3(width, op, logic, old, update, result)	\
+{									\
+	"BPF_ATOMIC | " #width ", " #op ": Test side effects, r0: "	\
+		#old " " #logic " " #update " = " #result,		\
+	.u.insns_int = {						\
+		BPF_ALU64_REG(BPF_MOV, R0, R10),			\
+		BPF_ALU32_IMM(BPF_MOV, R1, update),			\
+		BPF_ST_MEM(width, R10, -40, old),			\
+		BPF_ATOMIC_OP(width, op, R10, R1, -40),			\
+		BPF_ALU64_REG(BPF_SUB, R0, R10),			\
+		BPF_EXIT_INSN(),					\
+	},								\
+	INTERNAL,                                                       \
+	{ },                                                            \
+	{ { 0, 0 } },                                                   \
+	.stack_depth = 40,                                              \
+}
+#define BPF_ATOMIC_OP_TEST4(width, op, logic, old, update, result)	\
+{									\
+	"BPF_ATOMIC | " #width ", " #op ": Test fetch: "		\
+		#old " " #logic " " #update " = " #result,		\
+	.u.insns_int = {						\
+		BPF_ALU32_IMM(BPF_MOV, R3, update),			\
+		BPF_ST_MEM(width, R10, -40, old),			\
+		BPF_ATOMIC_OP(width, op, R10, R3, -40),			\
+		BPF_ALU64_REG(BPF_MOV, R0, R3),                         \
+		BPF_EXIT_INSN(),					\
+	},								\
+	INTERNAL,                                                       \
+	{ },                                                            \
+	{ { 0, (op) & BPF_FETCH ? old : update } },			\
+	.stack_depth = 40,                                              \
+}
+	/* BPF_ATOMIC | BPF_W: BPF_ADD */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	/* BPF_ATOMIC | BPF_W: BPF_ADD | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	/* BPF_ATOMIC | BPF_DW: BPF_ADD */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_ADD, +, 0x12, 0xab, 0xbd),
+	/* BPF_ATOMIC | BPF_DW: BPF_ADD | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_ADD | BPF_FETCH, +, 0x12, 0xab, 0xbd),
+	/* BPF_ATOMIC | BPF_W: BPF_AND */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_AND, &, 0x12, 0xab, 0x02),
+	/* BPF_ATOMIC | BPF_W: BPF_AND | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	/* BPF_ATOMIC | BPF_DW: BPF_AND */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_AND, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_AND, &, 0x12, 0xab, 0x02),
+	/* BPF_ATOMIC | BPF_DW: BPF_AND | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_AND | BPF_FETCH, &, 0x12, 0xab, 0x02),
+	/* BPF_ATOMIC | BPF_W: BPF_OR */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_OR, |, 0x12, 0xab, 0xbb),
+	/* BPF_ATOMIC | BPF_W: BPF_OR | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	/* BPF_ATOMIC | BPF_DW: BPF_OR */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_OR, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_OR, |, 0x12, 0xab, 0xbb),
+	/* BPF_ATOMIC | BPF_DW: BPF_OR | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_OR | BPF_FETCH, |, 0x12, 0xab, 0xbb),
+	/* BPF_ATOMIC | BPF_W: BPF_XOR */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	/* BPF_ATOMIC | BPF_W: BPF_XOR | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	/* BPF_ATOMIC | BPF_DW: BPF_XOR */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_XOR, ^, 0x12, 0xab, 0xb9),
+	/* BPF_ATOMIC | BPF_DW: BPF_XOR | BPF_FETCH */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_XOR | BPF_FETCH, ^, 0x12, 0xab, 0xb9),
+	/* BPF_ATOMIC | BPF_W: BPF_XCHG */
+	BPF_ATOMIC_OP_TEST1(BPF_W, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST2(BPF_W, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST3(BPF_W, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST4(BPF_W, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	/* BPF_ATOMIC | BPF_DW: BPF_XCHG */
+	BPF_ATOMIC_OP_TEST1(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+	BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab),
+#undef BPF_ATOMIC_OP_TEST1
+#undef BPF_ATOMIC_OP_TEST2
+#undef BPF_ATOMIC_OP_TEST3
+#undef BPF_ATOMIC_OP_TEST4
 	/* BPF_JMP32 | BPF_JEQ | BPF_K */
 	{
 		"JMP32_JEQ_K: Small immediate",
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 13/14] bpf/tests: Add tests for BPF_CMPXCHG
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (11 preceding siblings ...)
  2021-07-28 17:05 ` [PATCH 12/14] bpf/tests: Add tests for atomic operations Johan Almbladh
@ 2021-07-28 17:05 ` Johan Almbladh
  2021-07-29  0:45   ` Yonghong Song
  2021-07-28 17:05 ` [PATCH 14/14] bpf/tests: Add tail call test suite Johan Almbladh
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:05 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh

Add tests for BPF_CMPXCHG with both word and double word operands. As
with the tests for the other atomic operations, these tests only check
the result of the operation; the atomicity of the operations is not
tested.
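
The BPF_CMPXCHG semantics the tests rely on, sketched in plain C with
atomicity omitted (illustrative only):

	#include <stdint.h>

	/* R0 holds the expected value and always receives the old one */
	static void cmpxchg_sketch(uint32_t *mem, uint32_t *r0,
				   uint32_t src)
	{
		uint32_t old = *mem;

		if (old == *r0)
			*mem = src;	/* success: store new value */
		*r0 = old;		/* R0 gets old value either way */
	}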

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 166 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 166 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ac50cb023324..af5758151d0a 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -5682,6 +5682,172 @@ static struct bpf_test tests[] = {
 #undef BPF_ATOMIC_OP_TEST2
 #undef BPF_ATOMIC_OP_TEST3
 #undef BPF_ATOMIC_OP_TEST4
+	/* BPF_ATOMIC | BPF_W, BPF_CMPXCHG */
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test successful return",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01234567 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test successful store",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_LDX_MEM(BPF_W, R0, R10, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test failure return",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x76543210),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01234567 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test failure store",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x76543210),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_LDX_MEM(BPF_W, R0, R10, -40),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x01234567 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_W, BPF_CMPXCHG: Test side effects",
+		.u.insns_int = {
+			BPF_ST_MEM(BPF_W, R10, -40, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0x01234567),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0x89abcdef),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R3, -40),
+			BPF_ALU32_REG(BPF_MOV, R0, R3),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0x89abcdef } },
+		.stack_depth = 40,
+	},
+	/* BPF_ATOMIC | BPF_DW, BPF_CMPXCHG */
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test successful return",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test successful store",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_STX_MEM(BPF_DW, R10, R0, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
+			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R2),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test failure return",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_ALU64_IMM(BPF_ADD, R0, 1),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test failure store",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_ALU64_IMM(BPF_ADD, R0, 1),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
+			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
+	{
+		"BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test side effects",
+		.u.insns_int = {
+			BPF_LD_IMM64(R1, 0x0123456789abcdefULL),
+			BPF_LD_IMM64(R2, 0xfecdba9876543210ULL),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_STX_MEM(BPF_DW, R10, R1, -40),
+			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
+			BPF_LD_IMM64(R0, 0xfecdba9876543210ULL),
+			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_ALU64_REG(BPF_SUB, R0, R2),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0 } },
+		.stack_depth = 40,
+	},
 	/* BPF_JMP32 | BPF_JEQ | BPF_K */
 	{
 		"JMP32_JEQ_K: Small immediate",
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 14/14] bpf/tests: Add tail call test suite
  2021-07-28 17:04 [PATCH 00/14] bpf/tests: Extend the eBPF test suite Johan Almbladh
                   ` (12 preceding siblings ...)
  2021-07-28 17:05 ` [PATCH 13/14] bpf/tests: Add tests for BPF_CMPXCHG Johan Almbladh
@ 2021-07-28 17:05 ` Johan Almbladh
  2021-07-29  2:56   ` Yonghong Song
  13 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-28 17:05 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: kafai, songliubraving, yhs, john.fastabend, kpsingh,
	Tony.Ambardar, netdev, bpf, Johan Almbladh, kernel test robot

While BPF_CALL instructions were tested implicitly by the cBPF-to-eBPF
translation, there have been no tests for BPF_TAIL_CALL instructions.
The new test suite includes tests for tail call chaining, tail call count
tracking and error paths. It is mainly intended for JIT development and
testing.
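
For context, the runtime behavior exercised by the error paths can be
sketched as follows; the structure and names are assumptions made for
illustration, not the interpreter's code:

	struct prog_array {
		unsigned int max_entries;
		void *ptrs[];
	};

	/* Return the target program, or NULL to fall through */
	static void *tail_call_sketch(struct prog_array *array,
				      unsigned int index,
				      unsigned int *cnt,
				      unsigned int max_cnt)
	{
		if (index >= array->max_entries)
			return NULL;		/* index out of range */
		if ((*cnt)++ >= max_cnt)
			return NULL;		/* count limit reached */
		return array->ptrs[index];	/* may itself be NULL */
	}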

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Reported-by: kernel test robot <lkp@intel.com>
---
 lib/test_bpf.c | 249 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 249 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index af5758151d0a..222d454b2ed4 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -8981,8 +8981,249 @@ static __init int test_bpf(void)
 	return err_cnt ? -EINVAL : 0;
 }
 
+struct tail_call_test {
+	const char *descr;
+	struct bpf_insn insns[MAX_INSNS];
+	int result;
+	int stack_depth;
+};
+
+/*
+ * Magic marker used in test snippets for tail calls below.
+ * BPF_LD/MOV to R2 and R3 with this immediate value is replaced
+ * with the proper values by the test runner.
+ */
+#define TAIL_CALL_MARKER 0x7a11ca11
+
+/* Special offset to indicate a NULL call target */
+#define TAIL_CALL_NULL 0x7fff
+
+#define TAIL_CALL(offset)			       \
+	BPF_LD_IMM64(R2, TAIL_CALL_MARKER),	       \
+	BPF_RAW_INSN(BPF_ALU | BPF_MOV | BPF_K, R3, 0, \
+		     offset, TAIL_CALL_MARKER),	       \
+	BPF_JMP_IMM(BPF_TAIL_CALL, 0, 0, 0)
+
+/*
+ * Tail call tests. Each test case may call any other test in the table,
+ * including itself, specified as a relative index offset from the calling
+ * test. The index TAIL_CALL_NULL can be used to specify a NULL target
+ * function to test the JIT error path.
+ */
+static struct tail_call_test tail_call_tests[] = {
+	{
+		"Tail call leaf",
+		.insns = {
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			BPF_ALU64_IMM(BPF_ADD, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 1,
+	},
+	{
+		"Tail call 2",
+		.insns = {
+			BPF_ALU64_IMM(BPF_ADD, R1, 2),
+			TAIL_CALL(-1),
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 3,
+	},
+	{
+		"Tail call 3",
+		.insns = {
+			BPF_ALU64_IMM(BPF_ADD, R1, 3),
+			TAIL_CALL(-1),
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 6,
+	},
+	{
+		"Tail call 4",
+		.insns = {
+			BPF_ALU64_IMM(BPF_ADD, R1, 4),
+			TAIL_CALL(-1),
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 10,
+	},
+	{
+		"Tail call error path, max count reached",
+		.insns = {
+			BPF_ALU64_IMM(BPF_ADD, R1, 1),
+			BPF_ALU64_REG(BPF_MOV, R0, R1),
+			TAIL_CALL(0),
+			BPF_EXIT_INSN(),
+		},
+		.result = MAX_TAIL_CALL_CNT + 1,
+	},
+	{
+		"Tail call error path, NULL target",
+		.insns = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			TAIL_CALL(TAIL_CALL_NULL),
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 1,
+	},
+	{
+		/* Must be the last test */
+		"Tail call error path, index out of range",
+		.insns = {
+			BPF_ALU64_IMM(BPF_MOV, R0, -1),
+			TAIL_CALL(1),    /* Index out of range */
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.result = 1,
+	},
+};
+
+static void __init destroy_tail_call_tests(struct bpf_array *progs)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(tail_call_tests); i++)
+		if (progs->ptrs[i])
+			bpf_prog_free(progs->ptrs[i]);
+	kfree(progs);
+}
+
+static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
+{
+	struct bpf_array *progs;
+	int ntests = ARRAY_SIZE(tail_call_tests);
+	int which, err;
+
+	/* Allocate the table of programs to be used for tail calls */
+	progs = kzalloc(sizeof(*progs) + (ntests + 1) * sizeof(progs->ptrs[0]),
+			GFP_KERNEL);
+	if (!progs)
+		goto out_nomem;
+
+	/* Create all eBPF programs and populate the table */
+	for (which = 0; which < ntests; which++) {
+		struct tail_call_test *test = &tail_call_tests[which];
+		struct bpf_prog *fp;
+		int len, i;
+
+		/* Compute the number of program instructions */
+		for (len = 0; len < MAX_INSNS; len++) {
+			struct bpf_insn *insn = &test->insns[len];
+
+			if (len < MAX_INSNS - 1 &&
+			    insn->code == (BPF_LD | BPF_DW | BPF_IMM))
+				len++;
+			if (insn->code == 0)
+				break;
+		}
+
+		/* Allocate and initialize the program */
+		fp = bpf_prog_alloc(bpf_prog_size(len), 0);
+		if (!fp)
+			goto out_nomem;
+
+		fp->len = len;
+		fp->type = BPF_PROG_TYPE_SOCKET_FILTER;
+		fp->aux->stack_depth = test->stack_depth;
+		memcpy(fp->insnsi, test->insns, len * sizeof(struct bpf_insn));
+
+		/* Relocate runtime tail call offsets and addresses */
+		for (i = 0; i < len; i++) {
+			struct bpf_insn *insn = &fp->insnsi[i];
+			int target;
+
+			if (insn->imm != TAIL_CALL_MARKER)
+				continue;
+
+			switch (insn->code) {
+			case BPF_LD | BPF_DW | BPF_IMM:
+				if (insn->dst_reg == R2) {
+					insn[0].imm = (u32)(long)progs;
+					insn[1].imm = ((u64)(long)progs) >> 32;
+				}
+				break;
+
+			case BPF_ALU | BPF_MOV | BPF_K:
+			case BPF_ALU64 | BPF_MOV | BPF_K:
+				if (insn->off == TAIL_CALL_NULL)
+					target = ntests;
+				else
+					target = which + insn->off;
+				if (insn->dst_reg == R3)
+					insn->imm = target;
+				break;
+			}
+		}
+
+		fp = bpf_prog_select_runtime(fp, &err);
+		if (err)
+			goto out_err;
+
+		progs->ptrs[which] = fp;
+	}
+
+	/* The last entry contains a NULL program pointer */
+	progs->map.max_entries = ntests + 1;
+	*pprogs = progs;
+	return 0;
+
+out_nomem:
+	err = -ENOMEM;
+
+out_err:
+	if (progs)
+		destroy_tail_call_tests(progs);
+	return err;
+}
+
+static __init int test_tail_calls(struct bpf_array *progs)
+{
+	int i, err_cnt = 0, pass_cnt = 0;
+	int jit_cnt = 0, run_cnt = 0;
+
+	for (i = 0; i < ARRAY_SIZE(tail_call_tests); i++) {
+		struct tail_call_test *test = &tail_call_tests[i];
+		struct bpf_prog *fp = progs->ptrs[i];
+		u64 duration;
+		int ret;
+
+		cond_resched();
+
+		pr_info("#%d %s ", i, test->descr);
+		if (!fp) {
+			err_cnt++;
+			continue;
+		}
+		pr_cont("jited:%u ", fp->jited);
+
+		run_cnt++;
+		if (fp->jited)
+			jit_cnt++;
+
+		ret = __run_one(fp, NULL, MAX_TESTRUNS, &duration);
+		if (ret == test->result) {
+			pr_cont("%lld PASS", duration);
+			pass_cnt++;
+		} else {
+			pr_cont("ret %d != %d FAIL", ret, test->result);
+			err_cnt++;
+		}
+	}
+
+	pr_info("%s: Summary: %d PASSED, %d FAILED, [%d/%d JIT'ed]\n",
+		__func__, pass_cnt, err_cnt, jit_cnt, run_cnt);
+
+	return err_cnt ? -EINVAL : 0;
+}
+
 static int __init test_bpf_init(void)
 {
+	struct bpf_array *progs = NULL;
 	int ret;
 
 	ret = prepare_bpf_tests();
@@ -8994,6 +9235,14 @@ static int __init test_bpf_init(void)
 	if (ret)
 		return ret;
 
+	ret = prepare_tail_call_tests(&progs);
+	if (ret)
+		return ret;
+	ret = test_tail_calls(progs);
+	destroy_tail_call_tests(progs);
+	if (ret)
+		return ret;
+
 	return test_skb_segment();
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/14] bpf/tests: Add BPF_JMP32 test cases
  2021-07-28 17:04 ` [PATCH 01/14] bpf/tests: Add BPF_JMP32 test cases Johan Almbladh
@ 2021-07-28 22:31   ` Yonghong Song
  2021-07-29 21:30     ` Johan Almbladh
  0 siblings, 1 reply; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 22:31 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> An eBPF JIT may implement JMP32 operations in a different way than JMP,
> especially on 32-bit architectures. This patch adds a series of tests
> for JMP32 operations, mainly for testing JITs.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

LGTM with a few minor comments below.

Acked-by: Yonghong Song <yhs@fb.com>

> ---
>   lib/test_bpf.c | 511 +++++++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 511 insertions(+)
> 
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index f6d5d30d01bf..bfac033db590 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -4398,6 +4398,517 @@ static struct bpf_test tests[] = {
>   		{ { 0, 4134 } },
>   		.fill_helper = bpf_fill_stxdw,
>   	},
> +	/* BPF_JMP32 | BPF_JEQ | BPF_K */
> +	{
> +		"JMP32_JEQ_K: Small immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 123),
> +			BPF_JMP32_IMM(BPF_JEQ, R0, 321, 1),
> +			BPF_JMP32_IMM(BPF_JEQ, R0, 123, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 123 } }
> +	},
[...]
> +	/* BPF_JMP32 | BPF_JGT | BPF_X */
> +	{
> +		"JMP32_JGT_X",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
> +			BPF_JMP32_REG(BPF_JGT, R0, R1, 1),

Maybe change the offset from 1 to 2? Otherwise, this may jump to
   BPF_JMP32_REG(BPF_JGT, R0, R1, 1)
which will just do the same comparison and jump to BPF_EXIT_INSN()
which will also have R0 = 0xfffffffe at the end.
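
Something like the following, where the first branch now skips past the
second comparison as well (just a sketch):

	BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
	BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
	BPF_JMP32_REG(BPF_JGT, R0, R1, 2),	/* offset 2, not 1 */
	BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
	BPF_JMP32_REG(BPF_JGT, R0, R1, 1),
	BPF_ALU32_IMM(BPF_MOV, R0, 0),
	BPF_EXIT_INSN(),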

> +			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
> +			BPF_JMP32_REG(BPF_JGT, R0, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xfffffffe } }
> +	},
> +	/* BPF_JMP32 | BPF_JGE | BPF_K */
> +	{
> +		"JMP32_JGE_K: Small immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 123),
> +			BPF_JMP32_IMM(BPF_JGE, R0, 124, 1),
> +			BPF_JMP32_IMM(BPF_JGE, R0, 123, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 123 } }
> +	},
> +	{
> +		"JMP32_JGE_K: Large immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
> +			BPF_JMP32_IMM(BPF_JGE, R0, 0xffffffff, 1),
> +			BPF_JMP32_IMM(BPF_JGE, R0, 0xfffffffe, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xfffffffe } }
> +	},
> +	/* BPF_JMP32 | BPF_JGE | BPF_X */
> +	{
> +		"JMP32_JGE_X",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
> +			BPF_JMP32_REG(BPF_JGE, R0, R1, 1),

ditto, change offset 1 to 2?

> +			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffe),
> +			BPF_JMP32_REG(BPF_JGE, R0, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xfffffffe } }
> +	},
> +	/* BPF_JMP32 | BPF_JLT | BPF_K */
> +	{
> +		"JMP32_JLT_K: Small immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 123),
> +			BPF_JMP32_IMM(BPF_JLT, R0, 123, 1),
> +			BPF_JMP32_IMM(BPF_JLT, R0, 124, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 123 } }
> +	},
> +	{
> +		"JMP32_JLT_K: Large immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
> +			BPF_JMP32_IMM(BPF_JLT, R0, 0xfffffffd, 1),
> +			BPF_JMP32_IMM(BPF_JLT, R0, 0xffffffff, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xfffffffe } }
> +	},
> +	/* BPF_JMP32 | BPF_JLT | BPF_X */
> +	{
> +		"JMP32_JLT_X",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
> +			BPF_JMP32_REG(BPF_JLT, R0, R1, 1),

ditto.

> +			BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
> +			BPF_JMP32_REG(BPF_JLT, R0, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xfffffffe } }
> +	},
> +	/* BPF_JMP32 | BPF_JLE | BPF_K */
> +	{
> +		"JMP32_JLE_K: Small immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 123),
> +			BPF_JMP32_IMM(BPF_JLE, R0, 122, 1),
> +			BPF_JMP32_IMM(BPF_JLE, R0, 123, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 123 } }
> +	},
> +	{
> +		"JMP32_JLE_K: Large immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
> +			BPF_JMP32_IMM(BPF_JLE, R0, 0xfffffffd, 1),
> +			BPF_JMP32_IMM(BPF_JLE, R0, 0xfffffffe, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xfffffffe } }
> +	},
> +	/* BPF_JMP32 | BPF_JLE | BPF_X */
> +	{
> +		"JMP32_JLE_X",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffd),
> +			BPF_JMP32_REG(BPF_JLE, R0, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 0xfffffffe),
> +			BPF_JMP32_REG(BPF_JLE, R0, R1, 1),

ditto

> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xfffffffe } }
> +	},
> +	/* BPF_JMP32 | BPF_JSGT | BPF_K */
> +	{
> +		"JMP32_JSGT_K: Small immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -123),
> +			BPF_JMP32_IMM(BPF_JSGT, R0, -123, 1),
> +			BPF_JMP32_IMM(BPF_JSGT, R0, -124, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -123 } }
> +	},
> +	{
> +		"JMP32_JSGT_K: Large immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
> +			BPF_JMP32_IMM(BPF_JSGT, R0, -12345678, 1),
> +			BPF_JMP32_IMM(BPF_JSGT, R0, -12345679, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -12345678 } }
> +	},
> +	/* BPF_JMP32 | BPF_JSGT | BPF_X */
> +	{
> +		"JMP32_JSGT_X",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
> +			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
> +			BPF_JMP32_REG(BPF_JSGT, R0, R1, 1),

ditto

> +			BPF_ALU32_IMM(BPF_MOV, R1, -12345679),
> +			BPF_JMP32_REG(BPF_JSGT, R0, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -12345678 } }
> +	},
> +	/* BPF_JMP32 | BPF_JSGE | BPF_K */
> +	{
> +		"JMP32_JSGE_K: Small immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -123),
> +			BPF_JMP32_IMM(BPF_JSGE, R0, -122, 1),
> +			BPF_JMP32_IMM(BPF_JSGE, R0, -123, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -123 } }
> +	},
> +	{
> +		"JMP32_JSGE_K: Large immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
> +			BPF_JMP32_IMM(BPF_JSGE, R0, -12345677, 1),
> +			BPF_JMP32_IMM(BPF_JSGE, R0, -12345678, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -12345678 } }
> +	},
> +	/* BPF_JMP32 | BPF_JSGE | BPF_X */
> +	{
> +		"JMP32_JSGE_X",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
> +			BPF_ALU32_IMM(BPF_MOV, R1, -12345677),
> +			BPF_JMP32_REG(BPF_JSGE, R0, R1, 1),

ditto

> +			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
> +			BPF_JMP32_REG(BPF_JSGE, R0, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -12345678 } }
> +	},
> +	/* BPF_JMP32 | BPF_JSLT | BPF_K */
> +	{
> +		"JMP32_JSLT_K: Small immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -123),
> +			BPF_JMP32_IMM(BPF_JSLT, R0, -123, 1),
> +			BPF_JMP32_IMM(BPF_JSLT, R0, -122, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -123 } }
> +	},
> +	{
> +		"JMP32_JSLT_K: Large immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
> +			BPF_JMP32_IMM(BPF_JSLT, R0, -12345678, 1),
> +			BPF_JMP32_IMM(BPF_JSLT, R0, -12345677, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -12345678 } }
> +	},
> +	/* BPF_JMP32 | BPF_JSLT | BPF_X */
> +	{
> +		"JMP32_JSLT_X",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
> +			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
> +			BPF_JMP32_REG(BPF_JSLT, R0, R1, 1),

ditto

> +			BPF_ALU32_IMM(BPF_MOV, R1, -12345677),
> +			BPF_JMP32_REG(BPF_JSLT, R0, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -12345678 } }
> +	},
> +	/* BPF_JMP32 | BPF_JSLE | BPF_K */
> +	{
> +		"JMP32_JSLE_K: Small immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -123),
> +			BPF_JMP32_IMM(BPF_JSLE, R0, -124, 1),
> +			BPF_JMP32_IMM(BPF_JSLE, R0, -123, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -123 } }
> +	},
> +	{
> +		"JMP32_JSLE_K: Large immediate",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
> +			BPF_JMP32_IMM(BPF_JSLE, R0, -12345679, 1),
> +			BPF_JMP32_IMM(BPF_JSLE, R0, -12345678, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -12345678 } }
> +	},
> +	/* BPF_JMP32 | BPF_JSLE | BPF_K */
> +	{
> +		"JMP32_JSLE_X",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, -12345678),
> +			BPF_ALU32_IMM(BPF_MOV, R1, -12345679),
> +			BPF_JMP32_REG(BPF_JSLE, R0, R1, 1),

ditto

> +			BPF_ALU32_IMM(BPF_MOV, R1, -12345678),
> +			BPF_JMP32_REG(BPF_JSLE, R0, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, -12345678 } }
> +	},
>   	/* BPF_JMP | BPF_EXIT */
>   	{
>   		"JMP_EXIT",
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 02/14] bpf/tests: Add BPF_MOV tests for zero and sign extension
  2021-07-28 17:04 ` [PATCH 02/14] bpf/tests: Add BPF_MOV tests for zero and sign extension Johan Almbladh
@ 2021-07-28 22:36   ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 22:36 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> Tests for ALU32 and ALU64 MOV with different sizes of the immediate
> value. Depending on the immediate field width of the native CPU
> instructions, a JIT may generate code differently depending on the
> immediate value. Test that zero or sign extension is performed as
> expected. Mainly for JIT testing.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

Acked-by: Yonghong Song <yhs@fb.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 03/14] bpf/tests: Fix typos in test case descriptions
  2021-07-28 17:04 ` [PATCH 03/14] bpf/tests: Fix typos in test case descriptions Johan Almbladh
@ 2021-07-28 22:43   ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 22:43 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> This patch corrects the test description in a number of cases where
> the description differed from what was actually tested and expected.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

Acked-by: Yonghong Song <yhs@fb.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 04/14] bpf/tests: Add more tests of ALU32 and ALU64 bitwise operations
  2021-07-28 17:04 ` [PATCH 04/14] bpf/tests: Add more tests of ALU32 and ALU64 bitwise operations Johan Almbladh
@ 2021-07-28 22:53   ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 22:53 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> This patch adds tests of BPF_AND, BPF_OR and BPF_XOR with different
> magnitude of the immediate value. Mainly checking 32-bit JIT sub-word
> handling and zero/sign extension.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

Acked-by: Yonghong Song <yhs@fb.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 05/14] bpf/tests: Add more ALU32 tests for BPF_LSH/RSH/ARSH
  2021-07-28 17:04 ` [PATCH 05/14] bpf/tests: Add more ALU32 tests for BPF_LSH/RSH/ARSH Johan Almbladh
@ 2021-07-28 22:57   ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 22:57 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> This patch adds more tests of ALU32 shift operations BPF_LSH and BPF_RSH,
> including the special case of a zero immediate. Also add corresponding
> BPF_ARSH tests which were missing for ALU32.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

Acked-by: Yonghong Song <yhs@fb.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 06/14] bpf/tests: Add more BPF_LSH/RSH/ARSH tests for ALU64
  2021-07-28 17:04 ` [PATCH 06/14] bpf/tests: Add more BPF_LSH/RSH/ARSH tests for ALU64 Johan Almbladh
@ 2021-07-28 23:30   ` Yonghong Song
  2021-07-29 12:34     ` Johan Almbladh
  0 siblings, 1 reply; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 23:30 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> This patch adds a number of tests for BPF_LSH, BPF_RSH amd BPF_ARSH
> ALU64 operations with values that may trigger different JIT code paths.
> Mainly testing 32-bit JITs that implement ALU64 operations with two
> 32-bit CPU registers per operand.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> ---
>   lib/test_bpf.c | 544 ++++++++++++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 542 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index ef75dbf53ec2..b930fa35b9ef 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -4139,6 +4139,106 @@ static struct bpf_test tests[] = {
>   		{ },
>   		{ { 0, 0x80000000 } },
>   	},
> +	{
> +		"ALU64_LSH_X: Shift < 32, low word",
> +		.u.insns_int = {
> +			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 12),
> +			BPF_ALU64_REG(BPF_LSH, R0, R1),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xbcdef000 } }

In the bpf_test struct, the result is defined as __u32
         struct {
                 int data_size;
                 __u32 result;
         } test[MAX_SUBTESTS];

But the above result 0xbcdef000 does not really capture the bpf program
return value, which should be 0x3456789abcdef000.
Can we change "result" type to __u64 so the result truly captures the 
program return value?

We have several other similar cases for the rest of this patch.
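
That is, something like this (sketch):

	struct {
		int data_size;
		__u64 result;	/* widened to hold the full 64-bit R0 */
	} test[MAX_SUBTESTS];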

> +	},
> +	{
> +		"ALU64_LSH_X: Shift < 32, high word",
> +		.u.insns_int = {
> +			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 12),
> +			BPF_ALU64_REG(BPF_LSH, R0, R1),
> +			BPF_ALU64_IMM(BPF_RSH, R0, 32),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0x3456789a } }
> +	},
[...]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 07/14] bpf/tests: Add more ALU64 BPF_MUL tests
  2021-07-28 17:04 ` [PATCH 07/14] bpf/tests: Add more ALU64 BPF_MUL tests Johan Almbladh
@ 2021-07-28 23:32   ` Yonghong Song
  2021-07-29 21:21     ` Johan Almbladh
  0 siblings, 1 reply; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 23:32 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> This patch adds BPF_MUL tests for 64x32 and 64x64 multiply. Mainly
> testing 32-bit JITs that implement ALU64 operations with two 32-bit
> CPU registers per operand.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> ---
>   lib/test_bpf.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 48 insertions(+)
> 
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index b930fa35b9ef..eb61088a674f 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -3051,6 +3051,31 @@ static struct bpf_test tests[] = {
>   		{ },
>   		{ { 0, 2147483647 } },
>   	},
> +	{
> +		"ALU64_MUL_X: 64x64 multiply, low word",
> +		.u.insns_int = {
> +			BPF_LD_IMM64(R0, 0x0fedcba987654321LL),
> +			BPF_LD_IMM64(R1, 0x123456789abcdef0LL),
> +			BPF_ALU64_REG(BPF_MUL, R0, R1),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xe5618cf0 } }

Same here. Maybe capture the true 64-bit R0 value?

> +	},
> +	{
> +		"ALU64_MUL_X: 64x64 multiply, high word",
> +		.u.insns_int = {
> +			BPF_LD_IMM64(R0, 0x0fedcba987654321LL),
> +			BPF_LD_IMM64(R1, 0x123456789abcdef0LL),
> +			BPF_ALU64_REG(BPF_MUL, R0, R1),
> +			BPF_ALU64_IMM(BPF_RSH, R0, 32),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0x2236d88f } }
> +	},
>   	/* BPF_ALU | BPF_MUL | BPF_K */
>   	{
>   		"ALU_MUL_K: 2 * 3 = 6",
> @@ -3161,6 +3186,29 @@ static struct bpf_test tests[] = {
>   		{ },
>   		{ { 0, 0x1 } },
>   	},
> +	{
> +		"ALU64_MUL_K: 64x32 multiply, low word",
> +		.u.insns_int = {
> +			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
> +			BPF_ALU64_IMM(BPF_MUL, R0, 0x12345678),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xe242d208 } }
> +	},
> +	{
> +		"ALU64_MUL_K: 64x32 multiply, high word",
> +		.u.insns_int = {
> +			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
> +			BPF_ALU64_IMM(BPF_MUL, R0, 0x12345678),
> +			BPF_ALU64_IMM(BPF_RSH, R0, 32),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 0xc28f5c28 } }
> +	},
>   	/* BPF_ALU | BPF_DIV | BPF_X */
>   	{
>   		"ALU_DIV_X: 6 / 2 = 3",
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 08/14] bpf/tests: Add tests for ALU operations implemented with function calls
  2021-07-28 17:04 ` [PATCH 08/14] bpf/tests: Add tests for ALU operations implemented with function calls Johan Almbladh
@ 2021-07-28 23:52   ` Yonghong Song
  2021-07-29 21:17     ` Johan Almbladh
  0 siblings, 1 reply; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 23:52 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> 32-bit JITs may implement complex ALU64 instructions using function calls.
> The new tests check aspects related to this, such as register clobbering
> and register argument re-ordering.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> ---
>   lib/test_bpf.c | 138 +++++++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 138 insertions(+)
> 
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index eb61088a674f..1115e39630ce 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -1916,6 +1916,144 @@ static struct bpf_test tests[] = {
>   		{ },
>   		{ { 0, -1 } }
>   	},
> +	{
> +		/*
> +		 * Register (non-)clobbering test, in the case where a 32-bit
> +		 * JIT implements complex ALU64 operations via function calls.
> +		 */
> +		"INT: Register clobbering, R1 updated",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 123456789),
> +			BPF_ALU32_IMM(BPF_MOV, R2, 2),
> +			BPF_ALU32_IMM(BPF_MOV, R3, 3),
> +			BPF_ALU32_IMM(BPF_MOV, R4, 4),
> +			BPF_ALU32_IMM(BPF_MOV, R5, 5),
> +			BPF_ALU32_IMM(BPF_MOV, R6, 6),
> +			BPF_ALU32_IMM(BPF_MOV, R7, 7),
> +			BPF_ALU32_IMM(BPF_MOV, R8, 8),
> +			BPF_ALU32_IMM(BPF_MOV, R9, 9),
> +			BPF_ALU64_IMM(BPF_DIV, R1, 123456789),
> +			BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
> +			BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
> +			BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
> +			BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
> +			BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
> +			BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
> +			BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
> +			BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
> +			BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
> +			BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 1),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 1 } }
> +	},
> +	{
> +		"INT: Register clobbering, R2 updated",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789),
> +			BPF_ALU32_IMM(BPF_MOV, R3, 3),
> +			BPF_ALU32_IMM(BPF_MOV, R4, 4),
> +			BPF_ALU32_IMM(BPF_MOV, R5, 5),
> +			BPF_ALU32_IMM(BPF_MOV, R6, 6),
> +			BPF_ALU32_IMM(BPF_MOV, R7, 7),
> +			BPF_ALU32_IMM(BPF_MOV, R8, 8),
> +			BPF_ALU32_IMM(BPF_MOV, R9, 9),
> +			BPF_ALU64_IMM(BPF_DIV, R2, 123456789),
> +			BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
> +			BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
> +			BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
> +			BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
> +			BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
> +			BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
> +			BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
> +			BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
> +			BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
> +			BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 1),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 1 } }
> +	},

It looks like the above two tests, "R1 updated" and "R2 updated", are
very similar, and the only difference is that one immediate is
123456789 and the other is 2 * 123456789. But in the generated code,
both just end up with the final immediate. Could you explain what the
difference is in terms of JIT code generation for the above two tests?

> +	{
> +		/*
> +		 * Test 32-bit JITs that implement complex ALU64 operations as
> +		 * function calls R0 = f(R1, R2), and must re-arrange operands.
> +		 */
> +#define NUMER 0xfedcba9876543210ULL
> +#define DENOM 0x0123456789abcdefULL
> +		"ALU64_DIV X: Operand register permutations",
> +		.u.insns_int = {
> +			/* R0 / R2 */
> +			BPF_LD_IMM64(R0, NUMER),
> +			BPF_LD_IMM64(R2, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R0, R2),
> +			BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
> +			BPF_EXIT_INSN(),
> +			/* R1 / R0 */
> +			BPF_LD_IMM64(R1, NUMER),
> +			BPF_LD_IMM64(R0, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R1, R0),
> +			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
> +			BPF_EXIT_INSN(),
> +			/* R0 / R1 */
> +			BPF_LD_IMM64(R0, NUMER),
> +			BPF_LD_IMM64(R1, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R0, R1),
> +			BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
> +			BPF_EXIT_INSN(),
> +			/* R2 / R0 */
> +			BPF_LD_IMM64(R2, NUMER),
> +			BPF_LD_IMM64(R0, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R2, R0),
> +			BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
> +			BPF_EXIT_INSN(),
> +			/* R2 / R1 */
> +			BPF_LD_IMM64(R2, NUMER),
> +			BPF_LD_IMM64(R1, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R2, R1),
> +			BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
> +			BPF_EXIT_INSN(),
> +			/* R1 / R2 */
> +			BPF_LD_IMM64(R1, NUMER),
> +			BPF_LD_IMM64(R2, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R1, R2),
> +			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
> +			BPF_EXIT_INSN(),
> +			BPF_LD_IMM64(R0, 1),

Do we need this BPF_LD_IMM64(R0, 1)?
First, if we have it and the next "BPF_ALU64_REG(BPF_DIV, R1, R1)"
generates an incorrect value and exits early, you will still get
exit value 1, which will signal test success.

Second, if you don't have this R0 = 1, R0 will be DENOM
and you will be fine.

> +			/* R1 / R1 */
> +			BPF_LD_IMM64(R1, NUMER),
> +			BPF_ALU64_REG(BPF_DIV, R1, R1),
> +			BPF_JMP_IMM(BPF_JEQ, R1, 1, 1),
> +			BPF_EXIT_INSN(),
> +			/* R2 / R2 */
> +			BPF_LD_IMM64(R2, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R2, R2),
> +			BPF_JMP_IMM(BPF_JEQ, R2, 1, 1),
> +			BPF_EXIT_INSN(),
> +			/* R3 / R4 */
> +			BPF_LD_IMM64(R3, NUMER),
> +			BPF_LD_IMM64(R4, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R3, R4),
> +			BPF_JMP_IMM(BPF_JEQ, R3, NUMER / DENOM, 1),
> +			BPF_EXIT_INSN(),
> +			/* Successful return */
> +			BPF_LD_IMM64(R0, 1),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 1 } },
> +#undef NUMER
> +#undef DENOM
> +	},
>   	{
>   		"check: missing ret",
>   		.u.insns = {
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 09/14] bpf/tests: Add word-order tests for load/store of double words
  2021-07-28 17:04 ` [PATCH 09/14] bpf/tests: Add word-order tests for load/store of double words Johan Almbladh
@ 2021-07-28 23:54   ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 23:54 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> A double word (64-bit) load/store may be implemented as two successive
> 32-bit operations, one for each word. Check that the order of those
> operations is consistent with the machine endianness.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

Acked-by: Yonghong Song <yhs@fb.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 10/14] bpf/tests: Add branch conversion JIT test
  2021-07-28 17:04 ` [PATCH 10/14] bpf/tests: Add branch conversion JIT test Johan Almbladh
@ 2021-07-28 23:58   ` Yonghong Song
  2021-07-29 12:45     ` Johan Almbladh
  2021-07-29  0:55   ` Yonghong Song
  1 sibling, 1 reply; 43+ messages in thread
From: Yonghong Song @ 2021-07-28 23:58 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> Some JITs may need to convert a conditional jump instruction to
> a short PC-relative branch and a long unconditional jump, if the
> PC-relative offset exceeds the offset field width in the CPU instruction.
> This test triggers such branch conversion on the 32-bit MIPS JIT.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> ---
>   lib/test_bpf.c | 38 ++++++++++++++++++++++++++++++++++++++
>   1 file changed, 38 insertions(+)
> 
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index 8b94902702ed..55914b6236aa 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -461,6 +461,36 @@ static int bpf_fill_stxdw(struct bpf_test *self)
>   	return __bpf_fill_stxdw(self, BPF_DW);
>   }
>   
> +static int bpf_fill_long_jmp(struct bpf_test *self)
> +{
> +	unsigned int len = BPF_MAXINSNS;
> +	struct bpf_insn *insn;
> +	int i;
> +
> +	insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
> +	if (!insn)
> +		return -ENOMEM;

When will insn be freed?

> +
> +	insn[0] = BPF_ALU64_IMM(BPF_MOV, R0, 1);
> +	insn[1] = BPF_JMP_IMM(BPF_JEQ, R0, 1, len - 2 - 1);
> +
> +	/*
> +	 * Fill with a complex 64-bit operation that expands to a lot of
> +	 * instructions on 32-bit JITs. The large jump offset can then
> +	 * overflow the conditional branch field size, triggering a branch
> +	 * conversion mechanism in some JITs.
> +	 */
> +	for (i = 2; i < len - 1; i++)
> +		insn[i] = BPF_ALU64_IMM(BPF_MUL, R0, (i << 16) + i);
> +
> +	insn[len - 1] = BPF_EXIT_INSN();
> +
> +	self->u.ptr.insns = insn;
> +	self->u.ptr.len = len;
> +
> +	return 0;
> +}
> +
[...]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 11/14] bpf/tests: Add test for 32-bit context pointer argument passing
  2021-07-28 17:04 ` [PATCH 11/14] bpf/tests: Add test for 32-bit context pointer argument passing Johan Almbladh
@ 2021-07-29  0:09   ` Yonghong Song
  2021-07-29 13:29     ` Johan Almbladh
  0 siblings, 1 reply; 43+ messages in thread
From: Yonghong Song @ 2021-07-29  0:09 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> On a 32-bit architecture, the context pointer should occupy the low
> half of R1, and the other half should be zero.

I think this is probably true. The word choice "should" indicates
this doesn't need to be the case if people choose a different
implementation, right?

> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> ---
>   lib/test_bpf.c | 16 ++++++++++++++++
>   1 file changed, 16 insertions(+)
> 
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index 55914b6236aa..314af6eaeb92 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -2084,6 +2084,22 @@ static struct bpf_test tests[] = {
>   #undef NUMER
>   #undef DENOM
>   	},
> +#ifdef CONFIG_32BIT
> +	{
> +		"INT: 32-bit context pointer word order and zero-extension",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_JMP32_IMM(BPF_JEQ, R1, 0, 3),
> +			BPF_ALU64_IMM(BPF_RSH, R1, 32),
> +			BPF_JMP32_IMM(BPF_JNE, R1, 0, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R0, 1),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 1 } }
> +	},
> +#endif
>   	{
>   		"check: missing ret",
>   		.u.insns = {
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 12/14] bpf/tests: Add tests for atomic operations
  2021-07-28 17:05 ` [PATCH 12/14] bpf/tests: Add tests for atomic operations Johan Almbladh
@ 2021-07-29  0:36   ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-29  0:36 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:05 AM, Johan Almbladh wrote:
> Tests for each atomic arithmetic operation and BPF_XCHG, derived from
> old BPF_XADD tests. The tests include BPF_W/DW and BPF_FETCH variants.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

Acked-by: Yonghong Song <yhs@fb.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 13/14] bpf/tests: Add tests for BPF_CMPXCHG
  2021-07-28 17:05 ` [PATCH 13/14] bpf/tests: Add tests for BPF_CMPXCHG Johan Almbladh
@ 2021-07-29  0:45   ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-29  0:45 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:05 AM, Johan Almbladh wrote:
> Tests for BPF_CMPXCHG with both word and double word operands. As with
> the tests for other atomic operations, these tests only check the result
> of the arithmetic operation. The atomicity of the operations is not tested.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

Acked-by: Yonghong Song <yhs@fb.com>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 10/14] bpf/tests: Add branch conversion JIT test
  2021-07-28 17:04 ` [PATCH 10/14] bpf/tests: Add branch conversion JIT test Johan Almbladh
  2021-07-28 23:58   ` Yonghong Song
@ 2021-07-29  0:55   ` Yonghong Song
  2021-07-29 13:24     ` Johan Almbladh
  1 sibling, 1 reply; 43+ messages in thread
From: Yonghong Song @ 2021-07-29  0:55 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf



On 7/28/21 10:04 AM, Johan Almbladh wrote:
> Some JITs may need to convert a conditional jump instruction to
> a short PC-relative branch and a long unconditional jump, if the
> PC-relative offset exceeds the offset field width in the CPU instruction.
> This test triggers such branch conversion on the 32-bit MIPS JIT.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> ---
>   lib/test_bpf.c | 38 ++++++++++++++++++++++++++++++++++++++
>   1 file changed, 38 insertions(+)
> 
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index 8b94902702ed..55914b6236aa 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -461,6 +461,36 @@ static int bpf_fill_stxdw(struct bpf_test *self)
>   	return __bpf_fill_stxdw(self, BPF_DW);
>   }
>   
> +static int bpf_fill_long_jmp(struct bpf_test *self)
> +{
> +	unsigned int len = BPF_MAXINSNS;

BPF_MAXINSNS is 4096 as defined in uapi/linux/bpf_common.h.
Will it be able to trigger a PC-relative branch + long
unconditional jump?

> +	struct bpf_insn *insn;
> +	int i;
> +
> +	insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
> +	if (!insn)
> +		return -ENOMEM;
> +
> +	insn[0] = BPF_ALU64_IMM(BPF_MOV, R0, 1);
> +	insn[1] = BPF_JMP_IMM(BPF_JEQ, R0, 1, len - 2 - 1);
> +
> +	/*
> +	 * Fill with a complex 64-bit operation that expands to a lot of
> +	 * instructions on 32-bit JITs. The large jump offset can then
> +	 * overflow the conditional branch field size, triggering a branch
> +	 * conversion mechanism in some JITs.
> +	 */
> +	for (i = 2; i < len - 1; i++)
> +		insn[i] = BPF_ALU64_IMM(BPF_MUL, R0, (i << 16) + i);
> +
> +	insn[len - 1] = BPF_EXIT_INSN();
> +
> +	self->u.ptr.insns = insn;
> +	self->u.ptr.len = len;
> +
> +	return 0;
> +}
> +
[...]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 14/14] bpf/tests: Add tail call test suite
  2021-07-28 17:05 ` [PATCH 14/14] bpf/tests: Add tail call test suite Johan Almbladh
@ 2021-07-29  2:56   ` Yonghong Song
  2021-07-29 20:44     ` Johan Almbladh
  0 siblings, 1 reply; 43+ messages in thread
From: Yonghong Song @ 2021-07-29  2:56 UTC (permalink / raw)
  To: Johan Almbladh, ast, daniel, andrii
  Cc: kafai, songliubraving, john.fastabend, kpsingh, Tony.Ambardar,
	netdev, bpf, kernel test robot



On 7/28/21 10:05 AM, Johan Almbladh wrote:
> While BPF_CALL instructions were tested implicitly by the cBPF-to-eBPF
> translation, there has not been any tests for BPF_TAIL_CALL instructions.
> The new test suite includes tests for tail call chaining, tail call count
> tracking and error paths. It is mainly intended for JIT development and
> testing.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> Reported-by: kernel test robot <lkp@intel.com>

The above Reported-by tag can be removed. This patch itself is not
about fixing an issue reported by kernel test robot...

The patch looks good to me except a few minor comments below.

Acked-by: Yonghong Song <yhs@fb.com>

> ---
>   lib/test_bpf.c | 249 +++++++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 249 insertions(+)
> 
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index af5758151d0a..222d454b2ed4 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -8981,8 +8981,249 @@ static __init int test_bpf(void)
>   	return err_cnt ? -EINVAL : 0;
>   }
>   
> +struct tail_call_test {
> +	const char *descr;
> +	struct bpf_insn insns[MAX_INSNS];
> +	int result;
> +	int stack_depth;
> +};
> +
> +/*
> + * Magic marker used in test snippets for tail calls below.
> + * BPF_LD/MOV to R2 and R3 with this immediate value is replaced
> + * with the proper values by the test runner.
> + */
> +#define TAIL_CALL_MARKER 0x7a11ca11
> +
> +/* Special offset to indicate a NULL call target */
> +#define TAIL_CALL_NULL 0x7fff
> +
> +#define TAIL_CALL(offset)			       \
> +	BPF_LD_IMM64(R2, TAIL_CALL_MARKER),	       \
> +	BPF_RAW_INSN(BPF_ALU | BPF_MOV | BPF_K, R3, 0, \
> +		     offset, TAIL_CALL_MARKER),	       \
> +	BPF_JMP_IMM(BPF_TAIL_CALL, 0, 0, 0)
> +
> +/*
> + * Tail call tests. Each test case may call any other test in the table,
> + * including itself, specified as a relative index offset from the calling
> + * test. The index TAIL_CALL_NULL can be used to specify a NULL target
> + * function to test the JIT error path.
> + */
> +static struct tail_call_test tail_call_tests[] = {
> +	{
> +		"Tail call leaf",
> +		.insns = {
> +			BPF_ALU64_REG(BPF_MOV, R0, R1),
> +			BPF_ALU64_IMM(BPF_ADD, R0, 1),
> +			BPF_EXIT_INSN(),
> +		},
> +		.result = 1,
> +	},
> +	{
> +		"Tail call 2",
> +		.insns = {
> +			BPF_ALU64_IMM(BPF_ADD, R1, 2),
> +			TAIL_CALL(-1),
> +			BPF_ALU64_IMM(BPF_MOV, R0, -1),
> +			BPF_EXIT_INSN(),
> +		},
> +		.result = 3,
> +	},
> +	{
> +		"Tail call 3",
> +		.insns = {
> +			BPF_ALU64_IMM(BPF_ADD, R1, 3),
> +			TAIL_CALL(-1),
> +			BPF_ALU64_IMM(BPF_MOV, R0, -1),
> +			BPF_EXIT_INSN(),
> +		},
> +		.result = 6,
> +	},
> +	{
> +		"Tail call 4",
> +		.insns = {
> +			BPF_ALU64_IMM(BPF_ADD, R1, 4),
> +			TAIL_CALL(-1),
> +			BPF_ALU64_IMM(BPF_MOV, R0, -1),
> +			BPF_EXIT_INSN(),
> +		},
> +		.result = 10,
> +	},
> +	{
> +		"Tail call error path, max count reached",
> +		.insns = {
> +			BPF_ALU64_IMM(BPF_ADD, R1, 1),
> +			BPF_ALU64_REG(BPF_MOV, R0, R1),
> +			TAIL_CALL(0),
> +			BPF_EXIT_INSN(),
> +		},
> +		.result = MAX_TAIL_CALL_CNT + 1,
> +	},
> +	{
> +		"Tail call error path, NULL target",
> +		.insns = {
> +			BPF_ALU64_IMM(BPF_MOV, R0, -1),
> +			TAIL_CALL(TAIL_CALL_NULL),
> +			BPF_ALU64_IMM(BPF_MOV, R0, 1),
> +			BPF_EXIT_INSN(),
> +		},
> +		.result = 1,
> +	},
> +	{
> +		/* Must be the last test */
> +		"Tail call error path, index out of range",
> +		.insns = {
> +			BPF_ALU64_IMM(BPF_MOV, R0, -1),
> +			TAIL_CALL(1),    /* Index out of range */
> +			BPF_ALU64_IMM(BPF_MOV, R0, 1),
> +			BPF_EXIT_INSN(),
> +		},
> +		.result = 1,
> +	},
> +};
> +
> +static void __init destroy_tail_call_tests(struct bpf_array *progs)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(tail_call_tests); i++)
> +		if (progs->ptrs[i])
> +			bpf_prog_free(progs->ptrs[i]);
> +	kfree(progs);
> +}
> +
> +static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
> +{
> +	struct bpf_array *progs;
> +	int ntests = ARRAY_SIZE(tail_call_tests);
> +	int which, err;

reverse christmas tree?

> +
> +	/* Allocate the table of programs to be used for tail calls */
> +	progs = kzalloc(sizeof(*progs) + (ntests + 1) * sizeof(progs->ptrs[0]),
> +			GFP_KERNEL);
> +	if (!progs)
> +		goto out_nomem;
> +
> +	/* Create all eBPF programs and populate the table */
> +	for (which = 0; which < ntests; which++) {
> +		struct tail_call_test *test = &tail_call_tests[which];
> +		struct bpf_prog *fp;
> +		int len, i;
> +
> +		/* Compute the number of program instructions */
> +		for (len = 0; len < MAX_INSNS; len++) {
> +			struct bpf_insn *insn = &test->insns[len];
> +
> +			if (len < MAX_INSNS - 1 &&
> +			    insn->code == (BPF_LD | BPF_DW | BPF_IMM))
> +				len++;
> +			if (insn->code == 0)
> +				break;
> +		}
> +
> +		/* Allocate and initialize the program */
> +		fp = bpf_prog_alloc(bpf_prog_size(len), 0);
> +		if (!fp)
> +			goto out_nomem;
> +
> +		fp->len = len;
> +		fp->type = BPF_PROG_TYPE_SOCKET_FILTER;
> +		fp->aux->stack_depth = test->stack_depth;
> +		memcpy(fp->insnsi, test->insns, len * sizeof(struct bpf_insn));
> +
> +		/* Relocate runtime tail call offsets and addresses */
> +		for (i = 0; i < len; i++) {
> +			struct bpf_insn *insn = &fp->insnsi[i];
> +			int target;
> +
> +			if (insn->imm != TAIL_CALL_MARKER)
> +				continue;
> +
> +			switch (insn->code) {
> +			case BPF_LD | BPF_DW | BPF_IMM:
> +				if (insn->dst_reg == R2) {

Looks like the above condition is not needed. It is always true.

> +					insn[0].imm = (u32)(long)progs;
> +					insn[1].imm = ((u64)(long)progs) >> 32;
> +				}
> +				break;
> +
> +			case BPF_ALU | BPF_MOV | BPF_K:
> +			case BPF_ALU64 | BPF_MOV | BPF_K:

case BPF_ALU64 | BPF_MOV | BPF_K is not needed.

> +				if (insn->off == TAIL_CALL_NULL)
> +					target = ntests;
> +				else
> +					target = which + insn->off;
> +				if (insn->dst_reg == R3)

the same here, insn->dst_reg == R3 is not needed. It is always true.

I suggest setting insn->off = 0. Otherwise, it is an illegal insn.
We won't hit an issue here because we didn't invoke the verifier. It is
still good to make the insn legal.

> +					insn->imm = target;



> +				break;
> +			}
> +		}
> +
> +		fp = bpf_prog_select_runtime(fp, &err);
> +		if (err)
> +			goto out_err;
> +
> +		progs->ptrs[which] = fp;
> +	}
> +
> +	/* The last entry contains a NULL program pointer */
> +	progs->map.max_entries = ntests + 1;
> +	*pprogs = progs;
> +	return 0;
> +
> +out_nomem:
> +	err = -ENOMEM;
> +
> +out_err:
> +	if (progs)
> +		destroy_tail_call_tests(progs);
> +	return err;
> +}
> +
[...]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 06/14] bpf/tests: Add more BPF_LSH/RSH/ARSH tests for ALU64
  2021-07-28 23:30   ` Yonghong Song
@ 2021-07-29 12:34     ` Johan Almbladh
  2021-07-29 15:39       ` Yonghong Song
  0 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-29 12:34 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf

On Thu, Jul 29, 2021 at 1:30 AM Yonghong Song <yhs@fb.com> wrote:
> > @@ -4139,6 +4139,106 @@ static struct bpf_test tests[] = {
> >               { },
> >               { { 0, 0x80000000 } },
> >       },
> > +     {
> > +             "ALU64_LSH_X: Shift < 32, low word",
> > +             .u.insns_int = {
> > +                     BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
> > +                     BPF_ALU32_IMM(BPF_MOV, R1, 12),
> > +                     BPF_ALU64_REG(BPF_LSH, R0, R1),
> > +                     BPF_EXIT_INSN(),
> > +             },
> > +             INTERNAL,
> > +             { },
> > +             { { 0, 0xbcdef000 } }
>
> In bpf_test struct, the result is defined as __u32
>          struct {
>                  int data_size;
>                  __u32 result;
>          } test[MAX_SUBTESTS];
>
> But the above result 0xbcdef000 does not really capture the bpf program
> return value, which should be 0x3456789abcdef000.
> Can we change "result" type to __u64 so the result truly captures the
> program return value?

This was also my thought at first, but I don't think that is possible.
As I understand it, the eBPF functions have the prototype int
func(struct ctx *). While the context pointer will have a different
size on 32-bit and 64-bit architectures, the return value will always
be 32 bits on most, or all, platforms.
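
For reference (at least in the kernels I have looked at), the program
entry point is the bpf_func member of struct bpf_prog, and its return
type is a 32-bit unsigned int regardless of the pointer size:

	unsigned int (*bpf_func)(const void *ctx, const struct bpf_insn *insn);

so a full 64-bit R0 value cannot be observed through the test result.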

> We have several other similar cases for the rest of this patch.

I have used two ways to check the full 64-bit result in such cases.

1) Load the expected result as a 64-bit value in a register. Then jump
conditionally if the result matches this value or not. The jump
destinations each set a distinct value in R0, which is finally
examined as the result.

2) Run the test twice. The first one returns the low 32-bits of R0.
The second adds a 32-bit right shift to return the high 32 bits.

When I first wrote the tests I tried to use as few complex
instructions not under test as possible, in order to test each
instruction in isolation. Since the 32-bit right shift is a much
simpler operation than conditional jumps, at least in the 32-bit MIPS
JIT, I chose method (2) for most of the tests. Existing tests seem to
use method (1), so in some cases I used that instead when adding more
tests of the same operation. The motivation for the simple one-by-one
tests is mainly convenience and better diagnostics during JIT
development. Both methods (1) and (2) are equally valid of course.
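
As an illustration of method (2), the high-word companion of the
"ALU64_LSH_X: Shift < 32, low word" case quoted earlier would be a
sketch along these lines, where 0x3456789a is the high word of the
full result 0x3456789abcdef000:

	{
		"ALU64_LSH_X: Shift < 32, high word",
		.u.insns_int = {
			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
			BPF_ALU32_IMM(BPF_MOV, R1, 12),
			BPF_ALU64_REG(BPF_LSH, R0, R1),
			BPF_ALU64_IMM(BPF_RSH, R0, 32), /* select high word */
			BPF_EXIT_INSN(),
		},
		INTERNAL,
		{ },
		{ { 0, 0x3456789a } }
	},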

By the way, thanks a lot for the review, Yonghong!

Johan

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 10/14] bpf/tests: Add branch conversion JIT test
  2021-07-28 23:58   ` Yonghong Song
@ 2021-07-29 12:45     ` Johan Almbladh
  2021-07-29 15:46       ` Yonghong Song
  0 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-29 12:45 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf

On Thu, Jul 29, 2021 at 1:59 AM Yonghong Song <yhs@fb.com> wrote:
> > +static int bpf_fill_long_jmp(struct bpf_test *self)
> > +{
> > +     unsigned int len = BPF_MAXINSNS;
> > +     struct bpf_insn *insn;
> > +     int i;
> > +
> > +     insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
> > +     if (!insn)
> > +             return -ENOMEM;
>
> When will insn be freed?

It is freed by the existing test runner code. If the fill_helper
member is set, the function destroy_bpf_tests frees the insn pointer
in that test case. This is the same as with other tests that use the
fill_helper facility.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 10/14] bpf/tests: Add branch conversion JIT test
  2021-07-29  0:55   ` Yonghong Song
@ 2021-07-29 13:24     ` Johan Almbladh
  2021-07-29 15:50       ` Yonghong Song
  0 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-29 13:24 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf

On Thu, Jul 29, 2021 at 2:55 AM Yonghong Song <yhs@fb.com> wrote:
> > +static int bpf_fill_long_jmp(struct bpf_test *self)
> > +{
> > +     unsigned int len = BPF_MAXINSNS;
>
> BPF_MAXINSNS is 4096 as defined in uapi/linux/bpf_common.h.
> Will it be able to trigger a PC-relative branch + long
> unconditional jump?

It does, on the MIPS32 JIT. The ALU64 MUL instruction with a large
immediate was chosen since it expands to a lot of MIPS32 instructions:
2 to load the immediate, 1 to zero/sign extend it, and then 9 for the
64x64 multiply.
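
As a rough calculation, assuming the figures above (about 12 native
instructions per fill insn): the roughly 4000 filler instructions
expand to on the order of 48000 native instructions, while a MIPS32
conditional branch encodes a signed 16-bit word offset, i.e. a reach
of about 32K instructions either way. The forward branch over the
filled body therefore cannot be encoded directly and must be converted.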

Other JITs will be different of course. On the other hand, other
architectures have other limitations that this test may not trigger
anyway. I added the test because I was implementing a non-trivial
iterative branch conversion logic in the MIPS32 JIT. One can argue
that when such complex JIT mechanisms are added, the test suite should
also be updated to cover that, especially if the mechanism handles
something that almost never occurs in practice.

Since I was able to trigger the branch conversion with BPF_MAXINSNS
instructions, and no other test was using more, I left it at that.
However, should I or someone else work on the MIPS64 JIT, I think
updating the test suite so that similar special cases there are
triggered would be a valuable contribution.
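
For readers unfamiliar with the mechanism, the conversion rewrites an
out-of-range conditional branch into an inverted short branch around a
long unconditional jump, schematically (MIPS-style mnemonics, delay
slots omitted):

	out of range:
		beq	a, b, far_target

	converted:
		bne	a, b, 1f
		j	far_target
	1: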

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 11/14] bpf/tests: Add test for 32-bit context pointer argument passing
  2021-07-29  0:09   ` Yonghong Song
@ 2021-07-29 13:29     ` Johan Almbladh
  2021-07-29 15:50       ` Yonghong Song
  0 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-29 13:29 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf

On Thu, Jul 29, 2021 at 2:09 AM Yonghong Song <yhs@fb.com> wrote:
> On 7/28/21 10:04 AM, Johan Almbladh wrote:
> > On a 32-bit architecture, the context pointer should occupy the low
> > half of R1, and the other half should be zero.
>
> I think this is probably true. The word choice "should" indicates
> this doesn't need to be the case if people choose a different
> implementation, right?
>

Right. To the best of my knowledge this is true. I can change the
wording to "will" to remove the ambiguity.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 06/14] bpf/tests: Add more BPF_LSH/RSH/ARSH tests for ALU64
  2021-07-29 12:34     ` Johan Almbladh
@ 2021-07-29 15:39       ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-29 15:39 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf



On 7/29/21 5:34 AM, Johan Almbladh wrote:
> On Thu, Jul 29, 2021 at 1:30 AM Yonghong Song <yhs@fb.com> wrote:
>>> @@ -4139,6 +4139,106 @@ static struct bpf_test tests[] = {
>>>                { },
>>>                { { 0, 0x80000000 } },
>>>        },
>>> +     {
>>> +             "ALU64_LSH_X: Shift < 32, low word",
>>> +             .u.insns_int = {
>>> +                     BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R1, 12),
>>> +                     BPF_ALU64_REG(BPF_LSH, R0, R1),
>>> +                     BPF_EXIT_INSN(),
>>> +             },
>>> +             INTERNAL,
>>> +             { },
>>> +             { { 0, 0xbcdef000 } }
>>
>> In bpf_test struct, the result is defined as __u32
>>           struct {
>>                   int data_size;
>>                   __u32 result;
>>           } test[MAX_SUBTESTS];
>>
>> But the above result 0xbcdef000 does not really capture the bpf program
>> return value, which should be 0x3456789abcdef000.
>> Can we change "result" type to __u64 so the result truly captures the
>> program return value?
> 
> This was also my thought at first, but I don't think that is possible.
> As I understand it, the eBPF functions have the prototype int
> func(struct ctx *). While the context pointer will have a different
> size on 32-bit and 64-bit architectures, the return value will always
> be 32 bits on most, or all, platforms.

Thanks for the explanation. Yes, all BPF_PROG_RUN variants have a bpf
program return type of u32, so you are right, we cannot really check the
prog return value against a 64-bit R0.

> 
>> We have several other similar cases for the rest of this patch.
> 
> I have used two ways to check the full 64-bit result in such cases.
> 
> 1) Load the expected result as a 64-bit value in a register. Then jump
> conditionally if the result matches this value or not. The jump
> destinations each set a distinct value in R0, which is finally
> examined as the result.
> 
> 2) Run the test twice. The first one returns the low 32-bits of R0.
> The second adds a 32-bit right shift to return the high 32 bits.
> 
> When I first wrote the tests I tried to use as few complex
> instructions not under test as possible, in order to test each
> instruction in isolation. Since the 32-bit right shift is a much
> simpler operation than conditional jumps, at least in the 32-bit MIPS
> JIT, I chose method (2) for most of the tests. Existing tests seem to
> use method (1), so in some cases I used that instead when adding more
> tests of the same operation. The motivation for the simple one-by-one
> tests is mainly convenience and better diagnostics during JIT
> development. Both methods (1) and (2) are equally valid of course.

it is totally okay to use (2). Your tests are fine in that regard.

> 
> By the way, thanks a lot for the review, Yonghong!

You are welcome!

> 
> Johan
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 10/14] bpf/tests: Add branch conversion JIT test
  2021-07-29 12:45     ` Johan Almbladh
@ 2021-07-29 15:46       ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-29 15:46 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf



On 7/29/21 5:45 AM, Johan Almbladh wrote:
> On Thu, Jul 29, 2021 at 1:59 AM Yonghong Song <yhs@fb.com> wrote:
>>> +static int bpf_fill_long_jmp(struct bpf_test *self)
>>> +{
>>> +     unsigned int len = BPF_MAXINSNS;
>>> +     struct bpf_insn *insn;
>>> +     int i;
>>> +
>>> +     insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
>>> +     if (!insn)
>>> +             return -ENOMEM;
>>
>> When will insn be freed?
> 
> It is freed by the existing test runner code. If the fill_helper
> member is set, the function destroy_bpf_tests frees the insn pointer
> in that test case. This is the same as with other tests that use the
> fill_helper facility.

Sounds good. Thanks for the explanation.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 10/14] bpf/tests: Add branch conversion JIT test
  2021-07-29 13:24     ` Johan Almbladh
@ 2021-07-29 15:50       ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-29 15:50 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf



On 7/29/21 6:24 AM, Johan Almbladh wrote:
> On Thu, Jul 29, 2021 at 2:55 AM Yonghong Song <yhs@fb.com> wrote:
>>> +static int bpf_fill_long_jmp(struct bpf_test *self)
>>> +{
>>> +     unsigned int len = BPF_MAXINSNS;
>>
>> BPF_MAXINSNS is 4096 as defined in uapi/linux/bpf_common.h.
>> Will it be able to trigger a PC-relative branch + long
>> unconditional jump?
> 
> It does, on the MIPS32 JIT. The ALU64 MUL instruction with a large
> immediate was chosen since it expands to a lot of MIPS32 instructions:
> 2 to load the immediate, 1 to zero/sign extend it, and then 9 for the
> 64x64 multiply.

Maybe add a comment in the code to mention that with BPF_MAXINSNS a
PC-relative branch + long unconditional jump can be triggered on the
MIPS32 JIT. Other architectures may need a different/larger number?

> 
> Other JITs will be different of course. On the other hand, other
> architectures have other limitations that this test may not trigger
> anyway. I added the test because I was implementing a non-trivial
> iterative branch conversion logic in the MIPS32 JIT. One can argue
> that when such complex JIT mechanisms are added, the test suite should
> also be updated to cover that, especially if the mechanism handles
> something that almost never occurs in practice.
> 
> Since I was able to trigger the branch conversion with BPF_MAXINSNS
> instructions, and no other test was using more, I left it at that.
> However, should I or someone else work on the MIPS64 JIT, I think
> updating the test suite so that similar special cases there are
> triggered would be a valuable contribution.
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 11/14] bpf/tests: Add test for 32-bit context pointer argument passing
  2021-07-29 13:29     ` Johan Almbladh
@ 2021-07-29 15:50       ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-29 15:50 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf



On 7/29/21 6:29 AM, Johan Almbladh wrote:
> On Thu, Jul 29, 2021 at 2:09 AM Yonghong Song <yhs@fb.com> wrote:
>> On 7/28/21 10:04 AM, Johan Almbladh wrote:
>>> On a 32-bit architecture, the context pointer should occupy the low
>>> half of R1, and the other half should be zero.
>>
>> I think this is probably true. The word choice "should" indicates
>> this doesn't need to be the case if people choose a different
>> implementation, right?
>>
> 
> Right. To the best of my knowledge this is true. I can change the
> wording to "will" to remove the ambiguity.

Sounds good. thanks!

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 14/14] bpf/tests: Add tail call test suite
  2021-07-29  2:56   ` Yonghong Song
@ 2021-07-29 20:44     ` Johan Almbladh
  0 siblings, 0 replies; 43+ messages in thread
From: Johan Almbladh @ 2021-07-29 20:44 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf, kernel test robot

On Thu, Jul 29, 2021 at 4:56 AM Yonghong Song <yhs@fb.com> wrote:
> > +static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
> > +{
> > +     struct bpf_array *progs;
> > +     int ntests = ARRAY_SIZE(tail_call_tests);
> > +     int which, err;
>
> reverse christmas tree?

Will do.

> > +
> > +     /* Allocate the table of programs to be used for tail calls */
> > +     progs = kzalloc(sizeof(*progs) + (ntests + 1) * sizeof(progs->ptrs[0]),
> > +                     GFP_KERNEL);
> > +     if (!progs)
> > +             goto out_nomem;
> > +
> > +     /* Create all eBPF programs and populate the table */
> > +     for (which = 0; which < ntests; which++) {
> > +             struct tail_call_test *test = &tail_call_tests[which];
> > +             struct bpf_prog *fp;
> > +             int len, i;
> > +
> > +             /* Compute the number of program instructions */
> > +             for (len = 0; len < MAX_INSNS; len++) {
> > +                     struct bpf_insn *insn = &test->insns[len];
> > +
> > +                     if (len < MAX_INSNS - 1 &&
> > +                         insn->code == (BPF_LD | BPF_DW | BPF_IMM))
> > +                             len++;
> > +                     if (insn->code == 0)
> > +                             break;
> > +             }
> > +
> > +             /* Allocate and initialize the program */
> > +             fp = bpf_prog_alloc(bpf_prog_size(len), 0);
> > +             if (!fp)
> > +                     goto out_nomem;
> > +
> > +             fp->len = len;
> > +             fp->type = BPF_PROG_TYPE_SOCKET_FILTER;
> > +             fp->aux->stack_depth = test->stack_depth;
> > +             memcpy(fp->insnsi, test->insns, len * sizeof(struct bpf_insn));
> > +
> > +             /* Relocate runtime tail call offsets and addresses */
> > +             for (i = 0; i < len; i++) {
> > +                     struct bpf_insn *insn = &fp->insnsi[i];
> > +                     int target;
> > +
> > +                     if (insn->imm != TAIL_CALL_MARKER)
> > +                             continue;
> > +
> > +                     switch (insn->code) {
> > +                     case BPF_LD | BPF_DW | BPF_IMM:
> > +                             if (insn->dst_reg == R2) {
>
> Looks like the above condition is not needed. It is always true.
>
> > +                                     insn[0].imm = (u32)(long)progs;
> > +                                     insn[1].imm = ((u64)(long)progs) >> 32;
> > +                             }
> > +                             break;
> > +
> > +                     case BPF_ALU | BPF_MOV | BPF_K:
> > +                     case BPF_ALU64 | BPF_MOV | BPF_K:
>
> case BPF_ALU64 | BPF_MOV | BPF_K is not needed.
>
> > +                             if (insn->off == TAIL_CALL_NULL)
> > +                                     target = ntests;
> > +                             else
> > +                                     target = which + insn->off;
> > +                             if (insn->dst_reg == R3)
>
> the same here, insn->dst_reg == R3 is not needed. It is always true.

I added the register checks to further restrict the cases when
rewriting is done, but it might be clearer if the instruction is
always rewritten whenever the tail call marker is set. I can remove
the unnecessary conditions.

> I suggest setting insn->off = 0. Otherwise, it is an illegal insn.
> We won't hit an issue here because we didn't invoke the verifier. It is
> still good to make the insn legal.

I agree. Fixing it.
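
With both changes folded in, the relocation case would end up as
something like this (a sketch of the fix as discussed here, not the
final patch):

	case BPF_ALU | BPF_MOV | BPF_K:
		if (insn->off == TAIL_CALL_NULL)
			insn->imm = ntests;
		else
			insn->imm = which + insn->off;
		insn->off = 0;
		break;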

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 08/14] bpf/tests: Add tests for ALU operations implemented with function calls
  2021-07-28 23:52   ` Yonghong Song
@ 2021-07-29 21:17     ` Johan Almbladh
  2021-07-29 22:54       ` Yonghong Song
  0 siblings, 1 reply; 43+ messages in thread
From: Johan Almbladh @ 2021-07-29 21:17 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf

On Thu, Jul 29, 2021 at 1:52 AM Yonghong Song <yhs@fb.com> wrote:
> > +             /*
> > +              * Register (non-)clobbering test, in the case where a 32-bit
> > +              * JIT implements complex ALU64 operations via function calls.
> > +              */
> > +             "INT: Register clobbering, R1 updated",
> > +             .u.insns_int = {
> > +                     BPF_ALU32_IMM(BPF_MOV, R0, 0),
> > +                     BPF_ALU32_IMM(BPF_MOV, R1, 123456789),
> > +                     BPF_ALU32_IMM(BPF_MOV, R2, 2),
> > +                     BPF_ALU32_IMM(BPF_MOV, R3, 3),
> > +                     BPF_ALU32_IMM(BPF_MOV, R4, 4),
> > +                     BPF_ALU32_IMM(BPF_MOV, R5, 5),
> > +                     BPF_ALU32_IMM(BPF_MOV, R6, 6),
> > +                     BPF_ALU32_IMM(BPF_MOV, R7, 7),
> > +                     BPF_ALU32_IMM(BPF_MOV, R8, 8),
> > +                     BPF_ALU32_IMM(BPF_MOV, R9, 9),
> > +                     BPF_ALU64_IMM(BPF_DIV, R1, 123456789),
> > +                     BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
> > +                     BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
> > +                     BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
> > +                     BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
> > +                     BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
> > +                     BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
> > +                     BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
> > +                     BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
> > +                     BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
> > +                     BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
> > +                     BPF_ALU32_IMM(BPF_MOV, R0, 1),
> > +                     BPF_EXIT_INSN(),
> > +             },
> > +             INTERNAL,
> > +             { },
> > +             { { 0, 1 } }
> > +     },
> > +     {
> > +             "INT: Register clobbering, R2 updated",
> > +             .u.insns_int = {
> > +                     BPF_ALU32_IMM(BPF_MOV, R0, 0),
> > +                     BPF_ALU32_IMM(BPF_MOV, R1, 1),
> > +                     BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789),
> > +                     BPF_ALU32_IMM(BPF_MOV, R3, 3),
> > +                     BPF_ALU32_IMM(BPF_MOV, R4, 4),
> > +                     BPF_ALU32_IMM(BPF_MOV, R5, 5),
> > +                     BPF_ALU32_IMM(BPF_MOV, R6, 6),
> > +                     BPF_ALU32_IMM(BPF_MOV, R7, 7),
> > +                     BPF_ALU32_IMM(BPF_MOV, R8, 8),
> > +                     BPF_ALU32_IMM(BPF_MOV, R9, 9),
> > +                     BPF_ALU64_IMM(BPF_DIV, R2, 123456789),
> > +                     BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
> > +                     BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
> > +                     BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
> > +                     BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
> > +                     BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
> > +                     BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
> > +                     BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
> > +                     BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
> > +                     BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
> > +                     BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
> > +                     BPF_ALU32_IMM(BPF_MOV, R0, 1),
> > +                     BPF_EXIT_INSN(),
> > +             },
> > +             INTERNAL,
> > +             { },
> > +             { { 0, 1 } }
> > +     },
>
> It looks like the above two tests, "R1 updated" and "R2 updated", should
> be very similar, and the only difference is that one immediate is 123456789
> and the other is 2 * 123456789. But in the generated code they both just have
> the final immediate. Could you explain what the difference is, in terms of
> JIT, between the above two tests?

When a BPF_CALL instruction is executed, the eBPF assembler has
already saved any caller-saved registers that must be preserved, put
the arguments in R1-R5, and expects a return value in R0. It is just
for the JIT to emit the call.

Not so when an eBPF instruction is implemented by a function call,
like ALU64 DIV in a 32-bit JIT. In this case, the function call is
unexpected by the eBPF assembler, and must be invisible to it. Now the
JIT must take care of saving all caller-saved registers on the stack,
put the operands in the right argument registers, put the return value
in the destination register, and finally restore all caller-saved
registers without overwriting the computed result.

The test checks that all other registers retain their value after such
a hidden function call. However, one register will contain the result.
In order to verify that all registers are saved and restored properly,
we must vary the destination and run the test twice. It is not the
result of the operation that is tested, it is the absence of possible
side effects.

I can put a more elaborate description in the comment to explain this.
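
Schematically, the emitted sequence for such a hidden call could look
like the following. This is only an illustration: the emit_* helpers,
register sets and jit context are made up and belong to no particular
JIT; div64_u64() stands in for whatever helper implements the operation.

	/* ALU64 BPF_DIV dst, src implemented via a hidden helper call */
	emit_push_regs(ctx, caller_saved & ~pair(dst)); /* save live regs    */
	emit_mov64(ctx, arg_pair_0, pair(dst));         /* numerator to args */
	emit_mov64(ctx, arg_pair_1, pair(src));         /* denominator       */
	emit_call(ctx, div64_u64);                      /* the hidden call   */
	emit_mov64(ctx, pair(dst), ret_pair);           /* result to dst     */
	emit_pop_regs(ctx, caller_saved & ~pair(dst));  /* restore the rest  */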

>
> > +     {
> > +             /*
> > +              * Test 32-bit JITs that implement complex ALU64 operations as
> > +              * function calls R0 = f(R1, R2), and must re-arrange operands.
> > +              */
> > +#define NUMER 0xfedcba9876543210ULL
> > +#define DENOM 0x0123456789abcdefULL
> > +             "ALU64_DIV X: Operand register permutations",
> > +             .u.insns_int = {
> > +                     /* R0 / R2 */
> > +                     BPF_LD_IMM64(R0, NUMER),
> > +                     BPF_LD_IMM64(R2, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R0, R2),
> > +                     BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R1 / R0 */
> > +                     BPF_LD_IMM64(R1, NUMER),
> > +                     BPF_LD_IMM64(R0, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R1, R0),
> > +                     BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R0 / R1 */
> > +                     BPF_LD_IMM64(R0, NUMER),
> > +                     BPF_LD_IMM64(R1, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R0, R1),
> > +                     BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R2 / R0 */
> > +                     BPF_LD_IMM64(R2, NUMER),
> > +                     BPF_LD_IMM64(R0, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R2, R0),
> > +                     BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R2 / R1 */
> > +                     BPF_LD_IMM64(R2, NUMER),
> > +                     BPF_LD_IMM64(R1, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R2, R1),
> > +                     BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R1 / R2 */
> > +                     BPF_LD_IMM64(R1, NUMER),
> > +                     BPF_LD_IMM64(R2, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R1, R2),
> > +                     BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     BPF_LD_IMM64(R0, 1),
>
> Do we need this BPF_LD_IMM64(R0, 1)?
> First, if we have it and the next "BPF_ALU64_REG(BPF_DIV, R1, R1)"
> generates an incorrect value and exits, then you will get
> exit value 1, which will signal test success.
>
> Second, if you don't have this R0 = 1, R0 will be DENOM
> and you will be fine.

Good catch! No, it should not be there. It was probably left over from
previous debugging, or is a copy-and-paste error. I'll remove it.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 07/14] bpf/tests: Add more ALU64 BPF_MUL tests
  2021-07-28 23:32   ` Yonghong Song
@ 2021-07-29 21:21     ` Johan Almbladh
  0 siblings, 0 replies; 43+ messages in thread
From: Johan Almbladh @ 2021-07-29 21:21 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf

On Thu, Jul 29, 2021 at 1:32 AM Yonghong Song <yhs@fb.com> wrote:
> > @@ -3051,6 +3051,31 @@ static struct bpf_test tests[] = {
> >               { },
> >               { { 0, 2147483647 } },
> >       },
> > +     {
> > +             "ALU64_MUL_X: 64x64 multiply, low word",
> > +             .u.insns_int = {
> > +                     BPF_LD_IMM64(R0, 0x0fedcba987654321LL),
> > +                     BPF_LD_IMM64(R1, 0x123456789abcdef0LL),
> > +                     BPF_ALU64_REG(BPF_MUL, R0, R1),
> > +                     BPF_EXIT_INSN(),
> > +             },
> > +             INTERNAL,
> > +             { },
> > +             { { 0, 0xe5618cf0 } }
>
> Same here. Maybe capture the true 64-bit R0 value?

Same as the LSH/RSH/ARSH tests. Uses 32-bit shift to test high and low
words in two runs.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/14] bpf/tests: Add BPF_JMP32 test cases
  2021-07-28 22:31   ` Yonghong Song
@ 2021-07-29 21:30     ` Johan Almbladh
  0 siblings, 0 replies; 43+ messages in thread
From: Johan Almbladh @ 2021-07-29 21:30 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf

On Thu, Jul 29, 2021 at 12:31 AM Yonghong Song <yhs@fb.com> wrote:
> > +     /* BPF_JMP32 | BPF_JGT | BPF_X */
> > +     {
> > +             "JMP32_JGT_X",
> > +             .u.insns_int = {
> > +                     BPF_ALU32_IMM(BPF_MOV, R0, 0xfffffffe),
> > +                     BPF_ALU32_IMM(BPF_MOV, R1, 0xffffffff),
> > +                     BPF_JMP32_REG(BPF_JGT, R0, R1, 1),
>
> Maybe change the offset from 1 to 2? Otherwise, this may jump to
>    BPF_JMP32_REG(BPF_JGT, R0, R1, 1)
> which will just do the same comparison and jump to BPF_EXIT_INSN()
> which will also have R0 = 0xfffffffe at the end.

You are right. All BPF_X versions should have the first jump offset
incremented by one to account for the extra MOV that is not present in
the BPF_K version of the test. I'll correct it.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 08/14] bpf/tests: Add tests for ALU operations implemented with function calls
  2021-07-29 21:17     ` Johan Almbladh
@ 2021-07-29 22:54       ` Yonghong Song
  0 siblings, 0 replies; 43+ messages in thread
From: Yonghong Song @ 2021-07-29 22:54 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Tony Ambardar, Networking, bpf



On 7/29/21 2:17 PM, Johan Almbladh wrote:
> On Thu, Jul 29, 2021 at 1:52 AM Yonghong Song <yhs@fb.com> wrote:
>>> +             /*
>>> +              * Register (non-)clobbering test, in the case where a 32-bit
>>> +              * JIT implements complex ALU64 operations via function calls.
>>> +              */
>>> +             "INT: Register clobbering, R1 updated",
>>> +             .u.insns_int = {
>>> +                     BPF_ALU32_IMM(BPF_MOV, R0, 0),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R1, 123456789),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R2, 2),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R3, 3),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R4, 4),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R5, 5),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R6, 6),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R7, 7),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R8, 8),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R9, 9),
>>> +                     BPF_ALU64_IMM(BPF_DIV, R1, 123456789),
>>> +                     BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
>>> +                     BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
>>> +                     BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
>>> +                     BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
>>> +                     BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
>>> +                     BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
>>> +                     BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
>>> +                     BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
>>> +                     BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
>>> +                     BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R0, 1),
>>> +                     BPF_EXIT_INSN(),
>>> +             },
>>> +             INTERNAL,
>>> +             { },
>>> +             { { 0, 1 } }
>>> +     },
>>> +     {
>>> +             "INT: Register clobbering, R2 updated",
>>> +             .u.insns_int = {
>>> +                     BPF_ALU32_IMM(BPF_MOV, R0, 0),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R1, 1),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R3, 3),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R4, 4),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R5, 5),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R6, 6),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R7, 7),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R8, 8),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R9, 9),
>>> +                     BPF_ALU64_IMM(BPF_DIV, R2, 123456789),
>>> +                     BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
>>> +                     BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
>>> +                     BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
>>> +                     BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
>>> +                     BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
>>> +                     BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
>>> +                     BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
>>> +                     BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
>>> +                     BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
>>> +                     BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
>>> +                     BPF_ALU32_IMM(BPF_MOV, R0, 1),
>>> +                     BPF_EXIT_INSN(),
>>> +             },
>>> +             INTERNAL,
>>> +             { },
>>> +             { { 0, 1 } }
>>> +     },
>>
>> It looks like the above two tests, "R1 updated" and "R2 updated", should
>> be very similar, and the only difference is that one immediate is 123456789
>> and the other is 2 * 123456789. But in the generated code they both just have
>> the final immediate. Could you explain what the difference is, in terms of
>> JIT, between the above two tests?
> 
> When a BPF_CALL instruction is executed, the eBPF assembler has
> already saved any caller-saved registers that must be preserved, put
> the arguments in R1-R5, and expects a return value in R0. It is just
> for the JIT to emit the call.
> 
> Not so when an eBPF instruction is implemented by a function call,
> like ALU64 DIV in a 32-bit JIT. In this case, the function call is
> unexpected by the eBPF assembler, and must be invisible to it. Now the
> JIT must take care of saving all caller-saved registers on the stack,
> put the operands in the right argument registers, put the return value
> in the destination register, and finally restore all caller-saved
> registers without overwriting the computed result.
> 
> The test checks that all other registers retain their value after such
> a hidden function call. However, one register will contain the result.
> In order to verify that all registers are saved and restored properly,
> we must vary the destination and run the test twice. It is not the
> result of the operation that is tested, it is the absence of possible
> side effects.
> 
> I can put a more elaborate description in the comment to explain this.

Indeed, an elaborate description in the comments would be great.

> 
>>
>>> +     {
>>> +             /*
>>> +              * Test 32-bit JITs that implement complex ALU64 operations as
>>> +              * function calls R0 = f(R1, R2), and must re-arrange operands.
>>> +              */
[...]

^ permalink raw reply	[flat|nested] 43+ messages in thread
