* [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4
@ 2023-06-29  6:37 Yonghong Song
  2023-06-29  6:37 ` [RFC PATCH bpf-next 01/13] bpf: Support new sign-extension load insns Yonghong Song
                   ` (13 more replies)
  0 siblings, 14 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:37 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

In a previous discussion ([1]), it was agreed that we should introduce
cpu version 4 (llvm flag -mcpu=v4), which contains some instructions
that can simplify code, make code easier to understand, fix existing
problems, or simply improve feature completeness. More specifically,
the following new insns are proposed (a brief semantics sketch in C
follows the list):
  . sign extended load
  . sign extended mov
  . bswap
  . signed div/mod
  . ja with 32-bit offset
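
A rough C sketch of the intended semantics, for illustration only (not
kernel code; the function name and values are mine, and the 32-bit
offset jump has no direct C analogue beyond a plain goto):

  #include <stdint.h>

  int64_t cpu_v4_sketch(int64_t dst, int64_t src, const int8_t *mem)
  {
          dst = *mem;                             /* sign-extended load (s8, also s16/s32) */
          dst = (int8_t)src;                      /* sign-extended mov  (s8, also s16/s32) */
          dst = __builtin_bswap16((uint16_t)dst); /* unconditional bswap (16/32/64 bits)   */
          dst = dst / src;                        /* signed div; signed mod is analogous   */
          return dst;                             /* assumes src != 0                      */
  }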

This patch set adds kernel support for the insns proposed in [1].
Besides the above proposed insns, LLVM will also generate the BPF_ST
insn under -mcpu=v4 ([2]); BPF_ST already has full kernel support, so
no kernel changes are needed for it.

The patch set implements interpreter and JIT support for these new
insns. It has minimal verifier support, just enough to pass the BPF
selftests. More work will be required to cover verification and other
aspects (e.g. blinding).

I am sending the patch set out early to get some feedback before I
dig into the verifier details. Patch 3 (sign-extension mov insns)
prompted me to consider whether we should also have zero-extension
(unsigned) mov insns (see the Patch 3 commit message for details).
The last patch contains some statistics on how many new insns are
actually generated in the selftests. Please see each individual patch
for details.

  [1] https://lore.kernel.org/bpf/4bfe98be-5333-1c7e-2f6d-42486c8ec039@meta.com/
  [2] https://reviews.llvm.org/D144829

Yonghong Song (13):
  bpf: Support new sign-extension load insns
  bpf: Add verifier support for sign-extension load insns
  bpf: Support new sign-extension mov insns
  bpf: Support new unconditional bswap instruction
  bpf: Support new signed div/mod instructions.
  bpf: Support new 32bit offset jmp instruction
  bpf: Add kernel/bpftool asm support for new instructions
  selftests/bpf: Add unit tests for new sign-extension load insns
  selftests/bpf: Add unit tests for new sign-extension mov insns
  selftests/bpf: Add unit tests for new bswap insns
  selftests/bpf: Add unit tests for new sdiv/smod insns
  selftests/bpf: Add unit tests for new gotol insn
  selftests/bpf: Add a cpuv4 test runner for cpu=v4 testing

 arch/x86/net/bpf_jit_comp.c                   | 130 ++-
 include/uapi/linux/bpf.h                      |   1 +
 kernel/bpf/core.c                             | 170 +++-
 kernel/bpf/disasm.c                           |  57 +-
 kernel/bpf/verifier.c                         | 100 ++-
 tools/include/uapi/linux/bpf.h                |   1 +
 tools/testing/selftests/bpf/Makefile          |  18 +-
 .../selftests/bpf/prog_tests/verifier.c       |  10 +
 .../selftests/bpf/progs/verifier_bswap.c      |  45 ++
 .../selftests/bpf/progs/verifier_gotol.c      |  30 +
 .../selftests/bpf/progs/verifier_lds.c        |  46 ++
 .../selftests/bpf/progs/verifier_movs.c       |  67 ++
 .../selftests/bpf/progs/verifier_sdiv.c       | 763 ++++++++++++++++++
 13 files changed, 1357 insertions(+), 81 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_bswap.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_gotol.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_lds.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_movs.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_sdiv.c

-- 
2.34.1



* [RFC PATCH bpf-next 01/13] bpf: Support new sign-extension load insns
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
@ 2023-06-29  6:37 ` Yonghong Song
  2023-07-03  0:53   ` Alexei Starovoitov
  2023-06-29  6:37 ` [RFC PATCH bpf-next 02/13] bpf: Add verifier support for " Yonghong Song
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:37 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Add interpreter/jit support for the new sign-extension load insns.
Also add basic verifier support so that the new instructions can be
accepted by the verifier.
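
A minimal userspace C sketch of the semantic difference between the
existing zero-extending loads and the new sign-extending loads, for
illustration only (the value matches the selftest added later in the
series):

  #include <stdio.h>
  #include <inttypes.h>

  int main(void)
  {
          uint64_t val  = 0x3fe;
          uint64_t zext = (uint8_t)val;   /* BPF_MEM  B load: 0xfe */
          int64_t  sext = (int8_t)val;    /* BPF_MEMS B load: -2   */

          printf("zext=%#" PRIx64 " sext=%" PRId64 "\n", zext, sext);
          return 0;
  }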

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 arch/x86/net/bpf_jit_comp.c    | 31 ++++++++++++++++++++++++++++++-
 include/uapi/linux/bpf.h       |  1 +
 kernel/bpf/core.c              | 13 +++++++++++++
 kernel/bpf/verifier.c          |  8 ++++++--
 tools/include/uapi/linux/bpf.h |  1 +
 5 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 438adb695daa..70d6a2c289ec 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -779,6 +779,29 @@ static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 	*pprog = prog;
 }
 
+/* LDX: dst_reg = *(s8*)(src_reg + off) */
+static void emit_lds(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
+{
+	u8 *prog = *pprog;
+
+	switch (size) {
+	case BPF_B:
+		/* Emit 'movsx rax, byte ptr [rax + off]' */
+		EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xBE);
+		break;
+	case BPF_H:
+		/* Emit 'movsx rax, word ptr [rax + off]' */
+		EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xBF);
+		break;
+	case BPF_W:
+		/* Emit 'movsx rax, dword ptr [rax+0x14]' */
+		EMIT2(add_2mod(0x48, src_reg, dst_reg), 0x63);
+		break;
+	}
+	emit_insn_suffix(&prog, src_reg, dst_reg, off);
+	*pprog = prog;
+}
+
 /* STX: *(u8*)(dst_reg + off) = src_reg */
 static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 {
@@ -1370,6 +1393,9 @@ st:			if (is_imm8(insn->off))
 		case BPF_LDX | BPF_PROBE_MEM | BPF_W:
 		case BPF_LDX | BPF_MEM | BPF_DW:
 		case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
+		case BPF_LDX | BPF_MEMS | BPF_B:
+		case BPF_LDX | BPF_MEMS | BPF_H:
+		case BPF_LDX | BPF_MEMS | BPF_W:
 			insn_off = insn->off;
 
 			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
@@ -1415,7 +1441,10 @@ st:			if (is_imm8(insn->off))
 				start_of_ldx = prog;
 				end_of_jmp[-1] = start_of_ldx - end_of_jmp;
 			}
-			emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
+			if (BPF_MODE(insn->code) == BPF_MEMS)
+				emit_lds(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
+			else
+				emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
 			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
 				struct exception_table_entry *ex;
 				u8 *_insn = image + proglen + (start_of_ldx - temp);
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 60a9d59beeab..b28109bc5c54 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -19,6 +19,7 @@
 
 /* ld/ldx fields */
 #define BPF_DW		0x18	/* double word (64-bit) */
+#define BPF_MEMS	0x80	/* load with sign extension */
 #define BPF_ATOMIC	0xc0	/* atomic memory ops - op type in immediate */
 #define BPF_XADD	0xc0	/* exclusive add - legacy name */
 
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index dc85240a0134..acb7abd2eba0 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1610,6 +1610,9 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(LDX, MEM, H),			\
 	INSN_3(LDX, MEM, W),			\
 	INSN_3(LDX, MEM, DW),			\
+	INSN_3(LDX, MEMS, B),			\
+	INSN_3(LDX, MEMS, H),			\
+	INSN_3(LDX, MEMS, W),			\
 	/*   Immediate based. */		\
 	INSN_3(LD, IMM, DW)
 
@@ -1942,6 +1945,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 	LDST(DW, u64)
 #undef LDST
 
+#define LDS(SIZEOP, SIZE)						\
+	LDX_MEMS_##SIZEOP:						\
+		DST = *(SIZE *)(unsigned long) (SRC + insn->off);	\
+		CONT;
+
+	LDS(B,   s8)
+	LDS(H,  s16)
+	LDS(W,  s32)
+#undef LDS
+
 #define ATOMIC_ALU_OP(BOP, KOP)						\
 		case BOP:						\
 			if (BPF_SIZE(insn->code) == BPF_W)		\
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 11e54dd8b6dd..212c367e2f46 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -16790,7 +16790,8 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 
 	for (i = 0; i < insn_cnt; i++, insn++) {
 		if (BPF_CLASS(insn->code) == BPF_LDX &&
-		    (BPF_MODE(insn->code) != BPF_MEM || insn->imm != 0)) {
+		    ((BPF_MODE(insn->code) != BPF_MEM && BPF_MODE(insn->code) != BPF_MEMS) ||
+		    insn->imm != 0)) {
 			verbose(env, "BPF_LDX uses reserved fields\n");
 			return -EINVAL;
 		}
@@ -17488,7 +17489,10 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 		if (insn->code == (BPF_LDX | BPF_MEM | BPF_B) ||
 		    insn->code == (BPF_LDX | BPF_MEM | BPF_H) ||
 		    insn->code == (BPF_LDX | BPF_MEM | BPF_W) ||
-		    insn->code == (BPF_LDX | BPF_MEM | BPF_DW)) {
+		    insn->code == (BPF_LDX | BPF_MEM | BPF_DW) ||
+		    insn->code == (BPF_LDX | BPF_MEMS | BPF_B) ||
+		    insn->code == (BPF_LDX | BPF_MEMS | BPF_H) ||
+		    insn->code == (BPF_LDX | BPF_MEMS | BPF_W)) {
 			type = BPF_READ;
 		} else if (insn->code == (BPF_STX | BPF_MEM | BPF_B) ||
 			   insn->code == (BPF_STX | BPF_MEM | BPF_H) ||
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 60a9d59beeab..b28109bc5c54 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -19,6 +19,7 @@
 
 /* ld/ldx fields */
 #define BPF_DW		0x18	/* double word (64-bit) */
+#define BPF_MEMS	0x80	/* load with sign extension */
 #define BPF_ATOMIC	0xc0	/* atomic memory ops - op type in immediate */
 #define BPF_XADD	0xc0	/* exclusive add - legacy name */
 
-- 
2.34.1



* [RFC PATCH bpf-next 02/13] bpf: Add verifier support for sign-extension load insns
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
  2023-06-29  6:37 ` [RFC PATCH bpf-next 01/13] bpf: Support new sign-extension load insns Yonghong Song
@ 2023-06-29  6:37 ` Yonghong Song
  2023-06-29  6:37 ` [RFC PATCH bpf-next 03/13] bpf: Support new sign-extension mov insns Yonghong Song
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:37 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Add sign-extension load support for map values. This is a minimal
change to make the selftests pass.
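
For reference, a minimal sketch (not the patch code) of the value
range a sign-extending b/h/w load is expected to give the destination
register; the helper name is mine and standard C limit macros are used
for readability:

  #include <stdint.h>

  void sext_load_bounds(int size, int64_t *smin, int64_t *smax)
  {
          switch (size) {
          case 1: *smin = INT8_MIN;  *smax = INT8_MAX;  break; /*   -128 ..   127 */
          case 2: *smin = INT16_MIN; *smax = INT16_MAX; break; /* -32768 .. 32767 */
          case 4: *smin = INT32_MIN; *smax = INT32_MAX; break; /* INT_MIN .. INT_MAX */
          }
  }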

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 kernel/bpf/verifier.c | 38 ++++++++++++++++++++++++++------------
 1 file changed, 26 insertions(+), 12 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 212c367e2f46..6845504d42a5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6247,7 +6247,7 @@ static int check_stack_access_within_bounds(
  */
 static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
 			    int off, int bpf_size, enum bpf_access_type t,
-			    int value_regno, bool strict_alignment_once)
+			    int value_regno, bool strict_alignment_once, bool sign_ext_ld)
 {
 	struct bpf_reg_state *regs = cur_regs(env);
 	struct bpf_reg_state *reg = regs + regno;
@@ -6316,6 +6316,19 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 				__mark_reg_known(&regs[value_regno], val);
 			} else {
 				mark_reg_unknown(env, regs, value_regno);
+
+				if (sign_ext_ld) {
+					if (size == 1) {
+						regs[value_regno].smax_value = (char)INT_MAX;
+						regs[value_regno].smin_value = (char)INT_MIN;
+					} else if (size == 2) {
+						regs[value_regno].smax_value = (short)INT_MAX;
+						regs[value_regno].smin_value = (short)INT_MIN;
+					} else if (size == 4) {
+						regs[value_regno].smax_value = INT_MAX;
+						regs[value_regno].smin_value = INT_MIN;
+					}
+				}
 			}
 		}
 	} else if (base_type(reg->type) == PTR_TO_MEM) {
@@ -6477,7 +6490,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 	}
 
 	if (!err && size < BPF_REG_SIZE && value_regno >= 0 && t == BPF_READ &&
-	    regs[value_regno].type == SCALAR_VALUE) {
+	    regs[value_regno].type == SCALAR_VALUE && !sign_ext_ld) {
 		/* b/h/w load zero-extends, mark upper bits as known 0 */
 		coerce_reg_to_size(&regs[value_regno], size);
 	}
@@ -6571,17 +6584,17 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
 	 * case to simulate the register fill.
 	 */
 	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-			       BPF_SIZE(insn->code), BPF_READ, -1, true);
+			       BPF_SIZE(insn->code), BPF_READ, -1, true, false);
 	if (!err && load_reg >= 0)
 		err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
 				       BPF_SIZE(insn->code), BPF_READ, load_reg,
-				       true);
+				       true, false);
 	if (err)
 		return err;
 
 	/* Check whether we can write into the same memory. */
 	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-			       BPF_SIZE(insn->code), BPF_WRITE, -1, true);
+			       BPF_SIZE(insn->code), BPF_WRITE, -1, true, false);
 	if (err)
 		return err;
 
@@ -6827,7 +6840,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 				return zero_size_allowed ? 0 : -EACCES;
 
 			return check_mem_access(env, env->insn_idx, regno, offset, BPF_B,
-						atype, -1, false);
+						atype, -1, false, false);
 		}
 
 		fallthrough;
@@ -7199,7 +7212,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
 		/* we write BPF_DW bits (8 bytes) at a time */
 		for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
 			err = check_mem_access(env, insn_idx, regno,
-					       i, BPF_DW, BPF_WRITE, -1, false);
+					       i, BPF_DW, BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
 		}
@@ -7292,7 +7305,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
 
 		for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) {
 			err = check_mem_access(env, insn_idx, regno,
-					       i, BPF_DW, BPF_WRITE, -1, false);
+					       i, BPF_DW, BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
 		}
@@ -9422,7 +9435,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	 */
 	for (i = 0; i < meta.access_size; i++) {
 		err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B,
-				       BPF_WRITE, -1, false);
+				       BPF_WRITE, -1, false, false);
 		if (err)
 			return err;
 	}
@@ -16300,7 +16313,8 @@ static int do_check(struct bpf_verifier_env *env)
 			 */
 			err = check_mem_access(env, env->insn_idx, insn->src_reg,
 					       insn->off, BPF_SIZE(insn->code),
-					       BPF_READ, insn->dst_reg, false);
+					       BPF_READ, insn->dst_reg, false,
+					       BPF_MODE(insn->code) == BPF_MEMS);
 			if (err)
 				return err;
 
@@ -16337,7 +16351,7 @@ static int do_check(struct bpf_verifier_env *env)
 			/* check that memory (dst_reg + off) is writeable */
 			err = check_mem_access(env, env->insn_idx, insn->dst_reg,
 					       insn->off, BPF_SIZE(insn->code),
-					       BPF_WRITE, insn->src_reg, false);
+					       BPF_WRITE, insn->src_reg, false, false);
 			if (err)
 				return err;
 
@@ -16362,7 +16376,7 @@ static int do_check(struct bpf_verifier_env *env)
 			/* check that memory (dst_reg + off) is writeable */
 			err = check_mem_access(env, env->insn_idx, insn->dst_reg,
 					       insn->off, BPF_SIZE(insn->code),
-					       BPF_WRITE, -1, false);
+					       BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
 
-- 
2.34.1



* [RFC PATCH bpf-next 03/13] bpf: Support new sign-extension mov insns
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
  2023-06-29  6:37 ` [RFC PATCH bpf-next 01/13] bpf: Support new sign-extension load insns Yonghong Song
  2023-06-29  6:37 ` [RFC PATCH bpf-next 02/13] bpf: Add verifier support for " Yonghong Song
@ 2023-06-29  6:37 ` Yonghong Song
  2023-06-29  6:37 ` [RFC PATCH bpf-next 04/13] bpf: Support new unconditional bswap instruction Yonghong Song
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:37 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Add interpreter/jit support for the new sign-extension mov insns.
The verifier support is basic; better register range calculation
will be done in a future revision. A short semantics sketch in C
follows the NOTE below.

NOTE: currently new insns support:
  ALU:
      dst = (s8)src
      dst = (s16)src
  ALU64:
      dst = (s8)src
      dst = (s16)src
      dst = (s32)src
while the existing mov insns support:
  ALU:
      dst = (u32)src
  ALU64:
      dst = src
Should we also support zero-extension mov insns like the ones below?
  ALU:
      dst = (u8)src
      dst = (u16)src
  ALU64:
      dst = (u8)src
      dst = (u16)src
      dst = (u32)src
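
A minimal C sketch of the ALU64 sign-extension mov semantics, for
illustration only (the helper name is mine; for the 32-bit ALU class
only the s8/s16 forms exist and they operate on the w registers):

  #include <stdint.h>

  uint64_t movs64(uint64_t src, int off)
  {
          switch (off) {
          case 8:  return (int8_t)src;    /* r0 = (s8)r1  */
          case 16: return (int16_t)src;   /* r0 = (s16)r1 */
          case 32: return (int32_t)src;   /* r0 = (s32)r1 */
          default: return src;            /* off == 0: plain r0 = r1 */
          }
  }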

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 arch/x86/net/bpf_jit_comp.c | 43 ++++++++++++++++++++++++++++++++++---
 kernel/bpf/core.c           | 28 ++++++++++++++++++++++--
 kernel/bpf/verifier.c       | 14 +++++++++++-
 3 files changed, 79 insertions(+), 6 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 70d6a2c289ec..7c85d1b01931 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -701,6 +701,38 @@ static void emit_mov_reg(u8 **pprog, bool is64, u32 dst_reg, u32 src_reg)
 	*pprog = prog;
 }
 
+static void emit_movs_reg(u8 **pprog, int num_bits, bool is64, u32 dst_reg,
+			  u32 src_reg)
+{
+	u8 *prog = *pprog;
+
+	if (is64) {
+		/* movs[b,w,l]q dst, src */
+		if (num_bits == 8)
+			EMIT4(add_2mod(0x48, src_reg, dst_reg), 0x0f, 0xbe,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		else if (num_bits == 16)
+			EMIT4(add_2mod(0x48, src_reg, dst_reg), 0x0f, 0xbf,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		else if (num_bits == 32)
+			EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x63,
+			      add_2reg(0xC0, src_reg, dst_reg));
+	} else {
+		/* movs[b,w]l dst, src */
+		if (num_bits == 8) {
+			EMIT4(add_2mod(0x40, src_reg, dst_reg), 0x0f, 0xbe,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		} else if (num_bits == 16) {
+			if (is_ereg(dst_reg) || is_ereg(src_reg))
+				EMIT1(add_2mod(0x40, src_reg, dst_reg));
+			EMIT3(add_2mod(0x0f, src_reg, dst_reg), 0xbf,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		}
+	}
+
+	*pprog = prog;
+}
+
 /* Emit the suffix (ModR/M etc) for addressing *(ptr_reg + off) and val_reg */
 static void emit_insn_suffix(u8 **pprog, u32 ptr_reg, u32 val_reg, int off)
 {
@@ -1051,9 +1083,14 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 
 		case BPF_ALU64 | BPF_MOV | BPF_X:
 		case BPF_ALU | BPF_MOV | BPF_X:
-			emit_mov_reg(&prog,
-				     BPF_CLASS(insn->code) == BPF_ALU64,
-				     dst_reg, src_reg);
+			if (insn->off == 0)
+				emit_mov_reg(&prog,
+					     BPF_CLASS(insn->code) == BPF_ALU64,
+					     dst_reg, src_reg);
+			else
+				emit_movs_reg(&prog, insn->off,
+					      BPF_CLASS(insn->code) == BPF_ALU64,
+					      dst_reg, src_reg);
 			break;
 
 			/* neg dst */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index acb7abd2eba0..72ee246ac3af 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -61,6 +61,7 @@
 #define AX	regs[BPF_REG_AX]
 #define ARG1	regs[BPF_REG_ARG1]
 #define CTX	regs[BPF_REG_CTX]
+#define OFF	insn->off
 #define IMM	insn->imm
 
 struct bpf_mem_alloc bpf_global_ma;
@@ -1736,13 +1737,36 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		DST = -DST;
 		CONT;
 	ALU_MOV_X:
-		DST = (u32) SRC;
+		switch (OFF) {
+		case 0:
+			DST = (u32) SRC;
+			break;
+		case 8:
+			DST = (s8) SRC;
+			break;
+		case 16:
+			DST = (s16) SRC;
+			break;
+		}
 		CONT;
 	ALU_MOV_K:
 		DST = (u32) IMM;
 		CONT;
 	ALU64_MOV_X:
-		DST = SRC;
+		switch (OFF) {
+		case 0:
+			DST = SRC;
+			break;
+		case 8:
+			DST = (s8) SRC;
+			break;
+		case 16:
+			DST = (s16) SRC;
+			break;
+		case 32:
+			DST = (s32) SRC;
+			break;
+		}
 		CONT;
 	ALU64_MOV_K:
 		DST = IMM;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6845504d42a5..5c5b37b6b39a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12926,11 +12926,23 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	} else if (opcode == BPF_MOV) {
 
 		if (BPF_SRC(insn->code) == BPF_X) {
-			if (insn->imm != 0 || insn->off != 0) {
+			if (insn->imm != 0) {
 				verbose(env, "BPF_MOV uses reserved fields\n");
 				return -EINVAL;
 			}
 
+			if (BPF_CLASS(insn->code) == BPF_ALU) {
+				if (insn->off != 0 && insn->off != 8 && insn->off != 16) {
+					verbose(env, "BPF_MOV uses reserved fields\n");
+					return -EINVAL;
+				}
+			} else {
+				if (insn->off != 0 && insn->off != 8 && insn->off != 16 && insn->off != 32) {
+					verbose(env, "BPF_MOV uses reserved fields\n");
+					return -EINVAL;
+				}
+			}
+
 			/* check src operand */
 			err = check_reg_arg(env, insn->src_reg, SRC_OP);
 			if (err)
-- 
2.34.1



* [RFC PATCH bpf-next 04/13] bpf: Support new unconditional bswap instruction
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (2 preceding siblings ...)
  2023-06-29  6:37 ` [RFC PATCH bpf-next 03/13] bpf: Support new sign-extension mov insns Yonghong Song
@ 2023-06-29  6:37 ` Yonghong Song
  2023-06-29  6:37 ` [RFC PATCH bpf-next 05/13] bpf: Support new signed div/mod instructions Yonghong Song
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:37 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

The existing 'be' and 'le' insns do a conditional bswap depending
on host endianness. This patch implements an unconditional bswap
insn.
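
A minimal userspace C sketch of the new semantics, for illustration
only (the helper name is mine; bswap_16/32/64 are the glibc byte-swap
macros):

  #include <stdint.h>
  #include <byteswap.h>

  /* Swap the low 16/32/64 bits of dst regardless of host endianness,
   * unlike the existing be/le insns which swap only when the host
   * endianness differs from the requested one.
   */
  uint64_t bpf_bswap(uint64_t dst, int imm)
  {
          switch (imm) {
          case 16: return bswap_16((uint16_t)dst);
          case 32: return bswap_32((uint32_t)dst);
          case 64: return bswap_64(dst);
          }
          return dst;
  }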

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 arch/x86/net/bpf_jit_comp.c |  1 +
 kernel/bpf/core.c           | 14 ++++++++++++++
 kernel/bpf/verifier.c       |  2 +-
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7c85d1b01931..6385a8d740b0 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1322,6 +1322,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			break;
 
 		case BPF_ALU | BPF_END | BPF_FROM_BE:
+		case BPF_ALU64 | BPF_END | BPF_FROM_LE:
 			switch (imm32) {
 			case 16:
 				/* Emit 'ror %ax, 8' to swap lower 2 bytes */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 72ee246ac3af..b59b41a3d07c 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1524,6 +1524,7 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(ALU64, DIV,  X),			\
 	INSN_3(ALU64, MOD,  X),			\
 	INSN_2(ALU64, NEG),			\
+	INSN_3(ALU64, END, TO_LE),		\
 	/*   Immediate based. */		\
 	INSN_3(ALU64, ADD,  K),			\
 	INSN_3(ALU64, SUB,  K),			\
@@ -1845,6 +1846,19 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 			break;
 		}
 		CONT;
+	ALU64_END_TO_LE:
+		switch (IMM) {
+		case 16:
+			DST = (__force u16) __swab16(DST);
+			break;
+		case 32:
+			DST = (__force u32) __swab32(DST);
+			break;
+		case 64:
+			DST = (__force u64) __swab64(DST);
+			break;
+		}
+		CONT;
 
 	/* CALL */
 	JMP_CALL:
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5c5b37b6b39a..d515bfee1a8f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12901,7 +12901,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		} else {
 			if (insn->src_reg != BPF_REG_0 || insn->off != 0 ||
 			    (insn->imm != 16 && insn->imm != 32 && insn->imm != 64) ||
-			    BPF_CLASS(insn->code) == BPF_ALU64) {
+			    (BPF_CLASS(insn->code) == BPF_ALU64 && BPF_SRC(insn->code) != BPF_K)) {
 				verbose(env, "BPF_END uses reserved fields\n");
 				return -EINVAL;
 			}
-- 
2.34.1



* [RFC PATCH bpf-next 05/13] bpf: Support new signed div/mod instructions.
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (3 preceding siblings ...)
  2023-06-29  6:37 ` [RFC PATCH bpf-next 04/13] bpf: Support new unconditional bswap instruction Yonghong Song
@ 2023-06-29  6:37 ` Yonghong Song
  2023-06-29  6:37 ` [RFC PATCH bpf-next 06/13] bpf: Support new 32bit offset jmp instruction Yonghong Song
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:37 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Add interpreter/jit support for the new signed div/mod instructions.
Also add basic verifier support to ensure the new insns are
accepted.
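
A minimal C sketch of the expected results, for illustration only; the
values match the selftests added later in the series, and C division
truncates toward zero with the remainder taking the dividend's sign:

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
          int32_t a = -41, b = 2;

          assert(a / b == -20);    /* w0 = -41; w0 s/= 2;  -> -20 */
          assert(a % b == -1);     /* remainder keeps the dividend's sign */
          assert(-41 / -2 == 20);
          assert( 41 / -2 == -20);
          return 0;
  }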

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 arch/x86/net/bpf_jit_comp.c | 27 +++++++----
 kernel/bpf/core.c           | 96 ++++++++++++++++++++++++++++++-------
 kernel/bpf/verifier.c       |  6 ++-
 3 files changed, 103 insertions(+), 26 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 6385a8d740b0..0c8d881f3ada 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1194,15 +1194,26 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 				/* mov rax, dst_reg */
 				emit_mov_reg(&prog, is64, BPF_REG_0, dst_reg);
 
-			/*
-			 * xor edx, edx
-			 * equivalent to 'xor rdx, rdx', but one byte less
-			 */
-			EMIT2(0x31, 0xd2);
+			if (insn->off == 0) {
+				/*
+				 * xor edx, edx
+				 * equivalent to 'xor rdx, rdx', but one byte less
+				 */
+				EMIT2(0x31, 0xd2);
 
-			/* div src_reg */
-			maybe_emit_1mod(&prog, src_reg, is64);
-			EMIT2(0xF7, add_1reg(0xF0, src_reg));
+				/* div src_reg */
+				maybe_emit_1mod(&prog, src_reg, is64);
+				EMIT2(0xF7, add_1reg(0xF0, src_reg));
+			} else {
+				if (BPF_CLASS(insn->code) == BPF_ALU)
+					EMIT1(0x99); /* cltd */
+				else
+					EMIT2(0x48, 0x99); /* cqto */
+
+				/* idiv src_reg */
+				maybe_emit_1mod(&prog, src_reg, is64);
+				EMIT2(0xF7, add_1reg(0xF8, src_reg));
+			}
 
 			if (BPF_OP(insn->code) == BPF_MOD &&
 			    dst_reg != BPF_REG_3)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index b59b41a3d07c..279d095ca1d1 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1789,36 +1789,100 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		(*(s64 *) &DST) >>= IMM;
 		CONT;
 	ALU64_MOD_X:
-		div64_u64_rem(DST, SRC, &AX);
-		DST = AX;
+		switch (OFF) {
+		case 0:
+			div64_u64_rem(DST, SRC, &AX);
+			DST = AX;
+			break;
+		case 1:
+			AX = div64_s64(DST, SRC);
+			DST = DST - AX * SRC;
+			break;
+		}
 		CONT;
 	ALU_MOD_X:
-		AX = (u32) DST;
-		DST = do_div(AX, (u32) SRC);
+		switch (OFF) {
+		case 0:
+			AX = (u32) DST;
+			DST = do_div(AX, (u32) SRC);
+			break;
+		case 1:
+			AX = (s32) DST;
+			DST = do_div(AX, (s32) SRC);
+			break;
+		}
 		CONT;
 	ALU64_MOD_K:
-		div64_u64_rem(DST, IMM, &AX);
-		DST = AX;
+		switch (OFF) {
+		case 0:
+			div64_u64_rem(DST, IMM, &AX);
+			DST = AX;
+			break;
+		case 1:
+			AX = div64_s64(DST, IMM);
+			DST = DST - AX * IMM;
+			break;
+		}
 		CONT;
 	ALU_MOD_K:
-		AX = (u32) DST;
-		DST = do_div(AX, (u32) IMM);
+		switch (OFF) {
+		case 0:
+			AX = (u32) DST;
+			DST = do_div(AX, (u32) IMM);
+			break;
+		case 1:
+			AX = (s32) DST;
+			DST = do_div(AX, (s32) IMM);
+			break;
+		}
 		CONT;
 	ALU64_DIV_X:
-		DST = div64_u64(DST, SRC);
+		switch (OFF) {
+		case 0:
+			DST = div64_u64(DST, SRC);
+			break;
+		case 1:
+			DST = div64_s64(DST, SRC);
+			break;
+		}
 		CONT;
 	ALU_DIV_X:
-		AX = (u32) DST;
-		do_div(AX, (u32) SRC);
-		DST = (u32) AX;
+		switch (OFF) {
+		case 0:
+			AX = (u32) DST;
+			do_div(AX, (u32) SRC);
+			DST = (u32) AX;
+			break;
+		case 1:
+			AX = (s32) DST;
+			do_div(AX, (s32) SRC);
+			DST = (s32) AX;
+			break;
+		}
 		CONT;
 	ALU64_DIV_K:
-		DST = div64_u64(DST, IMM);
+		switch (OFF) {
+		case 0:
+			DST = div64_u64(DST, IMM);
+			break;
+		case 1:
+			DST = div64_s64(DST, IMM);
+			break;
+		}
 		CONT;
 	ALU_DIV_K:
-		AX = (u32) DST;
-		do_div(AX, (u32) IMM);
-		DST = (u32) AX;
+		switch (OFF) {
+		case 0:
+			AX = (u32) DST;
+			do_div(AX, (u32) IMM);
+			DST = (u32) AX;
+			break;
+		case 1:
+			AX = (s32) DST;
+			do_div(AX, (s32) IMM);
+			DST = (s32) AX;
+			break;
+		}
 		CONT;
 	ALU_END_TO_BE:
 		switch (IMM) {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d515bfee1a8f..cc14d2ac3c5a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13029,7 +13029,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	} else {	/* all other ALU ops: and, sub, xor, add, ... */
 
 		if (BPF_SRC(insn->code) == BPF_X) {
-			if (insn->imm != 0 || insn->off != 0) {
+			if (insn->imm != 0 || insn->off > 1 ||
+			    (insn->off == 1 && opcode != BPF_MOD && opcode != BPF_DIV)) {
 				verbose(env, "BPF_ALU uses reserved fields\n");
 				return -EINVAL;
 			}
@@ -13038,7 +13039,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 			if (err)
 				return err;
 		} else {
-			if (insn->src_reg != BPF_REG_0 || insn->off != 0) {
+			if (insn->src_reg != BPF_REG_0 || insn->off > 1 ||
+			    (insn->off == 1 && opcode != BPF_MOD && opcode != BPF_DIV)) {
 				verbose(env, "BPF_ALU uses reserved fields\n");
 				return -EINVAL;
 			}
-- 
2.34.1



* [RFC PATCH bpf-next 06/13] bpf: Support new 32bit offset jmp instruction
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (4 preceding siblings ...)
  2023-06-29  6:37 ` [RFC PATCH bpf-next 05/13] bpf: Support new signed div/mod instructions Yonghong Song
@ 2023-06-29  6:37 ` Yonghong Song
  2023-06-29  6:37 ` [RFC PATCH bpf-next 07/13] bpf: Add kernel/bpftool asm support for new instructions Yonghong Song
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:37 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Add interpreter/jit/verifier support for the new 32-bit offset jmp instruction.
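
For illustration, a small C sketch (using the uapi bpf.h definitions;
the helper name is mine) of how the jump target index is computed for
the two forms, mirroring the verifier change below:

  #include <linux/bpf.h>  /* struct bpf_insn, BPF_CLASS(), BPF_JMP32 */

  /* index of the next insn executed after an unconditional jump at idx */
  int ja_target(const struct bpf_insn *insn, int idx)
  {
          if (BPF_CLASS(insn->code) == BPF_JMP32)
                  return idx + insn->imm + 1;   /* gotol: 32-bit offset in imm */
          return idx + insn->off + 1;           /* goto:  16-bit offset in off */
  }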

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 arch/x86/net/bpf_jit_comp.c | 28 ++++++++++++++++++----------
 kernel/bpf/core.c           | 19 ++++++++++++++++---
 kernel/bpf/verifier.c       | 32 ++++++++++++++++++++++----------
 3 files changed, 56 insertions(+), 23 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 0c8d881f3ada..885820dcaf0e 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1808,16 +1808,24 @@ st:			if (is_imm8(insn->off))
 			break;
 
 		case BPF_JMP | BPF_JA:
-			if (insn->off == -1)
-				/* -1 jmp instructions will always jump
-				 * backwards two bytes. Explicitly handling
-				 * this case avoids wasting too many passes
-				 * when there are long sequences of replaced
-				 * dead code.
-				 */
-				jmp_offset = -2;
-			else
-				jmp_offset = addrs[i + insn->off] - addrs[i];
+		case BPF_JMP32 | BPF_JA:
+			if (BPF_CLASS(insn->code) == BPF_JMP) {
+				if (insn->off == -1)
+					/* -1 jmp instructions will always jump
+					 * backwards two bytes. Explicitly handling
+					 * this case avoids wasting too many passes
+					 * when there are long sequences of replaced
+					 * dead code.
+					 */
+					jmp_offset = -2;
+				else
+					jmp_offset = addrs[i + insn->off] - addrs[i];
+			} else {
+				if (insn->imm == -1)
+					jmp_offset = -2;
+				else
+					jmp_offset = addrs[i + insn->imm] - addrs[i];
+			}
 
 			if (!jmp_offset) {
 				/*
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 279d095ca1d1..288e49129018 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -373,7 +373,12 @@ static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, s32 end_old,
 {
 	const s32 off_min = S16_MIN, off_max = S16_MAX;
 	s32 delta = end_new - end_old;
-	s32 off = insn->off;
+	s32 off;
+
+	if (insn->code == (BPF_JMP32 | BPF_JA))
+		off = insn->imm;
+	else
+		off = insn->off;
 
 	if (curr < pos && curr + off + 1 >= end_old)
 		off += delta;
@@ -381,8 +386,12 @@ static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, s32 end_old,
 		off -= delta;
 	if (off < off_min || off > off_max)
 		return -ERANGE;
-	if (!probe_pass)
-		insn->off = off;
+	if (!probe_pass) {
+		if (insn->code == (BPF_JMP32 | BPF_JA))
+			insn->imm = off;
+		else
+			insn->off = off;
+	}
 	return 0;
 }
 
@@ -1593,6 +1602,7 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(JMP, JSLE, K),			\
 	INSN_3(JMP, JSET, K),			\
 	INSN_2(JMP, JA),			\
+	INSN_2(JMP32, JA),			\
 	/* Store instructions. */		\
 	/*   Register based. */			\
 	INSN_3(STX, MEM,  B),			\
@@ -1972,6 +1982,9 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 	JMP_JA:
 		insn += insn->off;
 		CONT;
+	JMP32_JA:
+		insn += insn->imm;
+		CONT;
 	JMP_EXIT:
 		return BPF_R0;
 	/* JMP */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cc14d2ac3c5a..28832ae9e6ca 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2854,7 +2854,10 @@ static int check_subprogs(struct bpf_verifier_env *env)
 			goto next;
 		if (BPF_OP(code) == BPF_EXIT || BPF_OP(code) == BPF_CALL)
 			goto next;
-		off = i + insn[i].off + 1;
+		if (code == (BPF_JMP32 | BPF_JA))
+			off = i + insn[i].imm + 1;
+		else
+			off = i + insn[i].off + 1;
 		if (off < subprog_start || off >= subprog_end) {
 			verbose(env, "jump out of range from insn %d to %d\n", i, off);
 			return -EINVAL;
@@ -2866,6 +2869,7 @@ static int check_subprogs(struct bpf_verifier_env *env)
 			 * or unconditional jump back
 			 */
 			if (code != (BPF_JMP | BPF_EXIT) &&
+			    code != (BPF_JMP32 | BPF_JA) &&
 			    code != (BPF_JMP | BPF_JA)) {
 				verbose(env, "last insn is not an exit or jmp\n");
 				return -EINVAL;
@@ -14584,7 +14588,7 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
 static int visit_insn(int t, struct bpf_verifier_env *env)
 {
 	struct bpf_insn *insns = env->prog->insnsi, *insn = &insns[t];
-	int ret;
+	int ret, off;
 
 	if (bpf_pseudo_func(insn))
 		return visit_func_call_insn(t, insns, env, true);
@@ -14632,14 +14636,19 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
 		if (BPF_SRC(insn->code) != BPF_K)
 			return -EINVAL;
 
+		if (BPF_CLASS(insn->code) == BPF_JMP)
+			off = insn->off;
+		else
+			off = insn->imm;
+
 		/* unconditional jump with single edge */
-		ret = push_insn(t, t + insn->off + 1, FALLTHROUGH, env,
+		ret = push_insn(t, t + off + 1, FALLTHROUGH, env,
 				true);
 		if (ret)
 			return ret;
 
-		mark_prune_point(env, t + insn->off + 1);
-		mark_jmp_point(env, t + insn->off + 1);
+		mark_prune_point(env, t + off + 1);
+		mark_jmp_point(env, t + off + 1);
 
 		return ret;
 
@@ -16435,15 +16444,18 @@ static int do_check(struct bpf_verifier_env *env)
 				mark_reg_scratched(env, BPF_REG_0);
 			} else if (opcode == BPF_JA) {
 				if (BPF_SRC(insn->code) != BPF_K ||
-				    insn->imm != 0 ||
 				    insn->src_reg != BPF_REG_0 ||
 				    insn->dst_reg != BPF_REG_0 ||
-				    class == BPF_JMP32) {
+				    (class == BPF_JMP && insn->imm != 0) ||
+				    (class == BPF_JMP32 && insn->off != 0)) {
 					verbose(env, "BPF_JA uses reserved fields\n");
 					return -EINVAL;
 				}
 
-				env->insn_idx += insn->off + 1;
+				if (class == BPF_JMP)
+					env->insn_idx += insn->off + 1;
+				else
+					env->insn_idx += insn->imm + 1;
 				continue;
 
 			} else if (opcode == BPF_EXIT) {
@@ -17290,13 +17302,13 @@ static bool insn_is_cond_jump(u8 code)
 {
 	u8 op;
 
+	op = BPF_OP(code);
 	if (BPF_CLASS(code) == BPF_JMP32)
-		return true;
+		return op != BPF_JA;
 
 	if (BPF_CLASS(code) != BPF_JMP)
 		return false;
 
-	op = BPF_OP(code);
 	return op != BPF_JA && op != BPF_EXIT && op != BPF_CALL;
 }
 
-- 
2.34.1



* [RFC PATCH bpf-next 07/13] bpf: Add kernel/bpftool asm support for new instructions
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (5 preceding siblings ...)
  2023-06-29  6:37 ` [RFC PATCH bpf-next 06/13] bpf: Support new 32bit offset jmp instruction Yonghong Song
@ 2023-06-29  6:37 ` Yonghong Song
  2023-06-29  6:38 ` [RFC PATCH bpf-next 08/13] selftests/bpf: Add unit tests for new sign-extension load insns Yonghong Song
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:37 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Add asm support for the new instructions so that the kernel verifier
and bpftool xlated insn dumps can show proper asm syntax for them.
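
Based on the format strings below, the new insns are rendered roughly
as follows (the opcode byte prefix is omitted; register numbers and
offsets are placeholders):

  r1 = *(s8 *)(r2 +0)     /* sign-extension load, also s16/s32 */
  r1 = (s8)r2             /* sign-extension mov, also s16/s32; w regs for 32-bit */
  r1 s/= r2               /* signed div; signed mod prints s%= */
  r1 = bswap16 r1         /* unconditional bswap, also bswap32/bswap64 */
  gotol pc+1234           /* jump with 32-bit offset */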

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 kernel/bpf/disasm.c | 57 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 51 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 7b4afb7d96db..cc46e708df5e 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -87,6 +87,17 @@ const char *const bpf_alu_string[16] = {
 	[BPF_END >> 4]  = "endian",
 };
 
+const char *const bpf_alu_sign_string[16] = {
+	[BPF_DIV >> 4]  = "s/=",
+	[BPF_MOD >> 4]  = "s%=",
+};
+
+const char *const bpf_movs_string[4] = {
+	[0] = "(s8)",
+	[1] = "(s16)",
+	[3] = "(s32)",
+};
+
 static const char *const bpf_atomic_alu_string[16] = {
 	[BPF_ADD >> 4]  = "add",
 	[BPF_AND >> 4]  = "and",
@@ -101,6 +112,12 @@ static const char *const bpf_ldst_string[] = {
 	[BPF_DW >> 3] = "u64",
 };
 
+static const char *const bpf_lds_string[] = {
+	[BPF_W >> 3]  = "s32",
+	[BPF_H >> 3]  = "s16",
+	[BPF_B >> 3]  = "s8",
+};
+
 static const char *const bpf_jmp_string[16] = {
 	[BPF_JA >> 4]   = "jmp",
 	[BPF_JEQ >> 4]  = "==",
@@ -128,6 +145,26 @@ static void print_bpf_end_insn(bpf_insn_print_t verbose,
 		insn->imm, insn->dst_reg);
 }
 
+static void print_bpf_bswap_insn(bpf_insn_print_t verbose,
+			       void *private_data,
+			       const struct bpf_insn *insn)
+{
+	verbose(private_data, "(%02x) r%d = bswap%d r%d\n",
+		insn->code, insn->dst_reg,
+		insn->imm, insn->dst_reg);
+}
+
+static bool is_sdiv_smod(const struct bpf_insn *insn) {
+	u8 op = BPF_OP(insn->code);
+	u16 off = insn->off;
+
+	return (op == BPF_DIV || op == BPF_MOD) && off == 1;
+}
+
+static bool is_movs(const struct bpf_insn *insn) {
+	return BPF_OP(insn->code) == BPF_MOV && insn->off != 0;
+}
+
 void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 		    const struct bpf_insn *insn,
 		    bool allow_ptr_leaks)
@@ -138,7 +175,7 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 	if (class == BPF_ALU || class == BPF_ALU64) {
 		if (BPF_OP(insn->code) == BPF_END) {
 			if (class == BPF_ALU64)
-				verbose(cbs->private_data, "BUG_alu64_%02x\n", insn->code);
+				print_bpf_bswap_insn(verbose, cbs->private_data, insn);
 			else
 				print_bpf_end_insn(verbose, cbs->private_data, insn);
 		} else if (BPF_OP(insn->code) == BPF_NEG) {
@@ -147,17 +184,20 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 				insn->dst_reg, class == BPF_ALU ? 'w' : 'r',
 				insn->dst_reg);
 		} else if (BPF_SRC(insn->code) == BPF_X) {
-			verbose(cbs->private_data, "(%02x) %c%d %s %c%d\n",
+			verbose(cbs->private_data, "(%02x) %c%d %s %s%c%d\n",
 				insn->code, class == BPF_ALU ? 'w' : 'r',
 				insn->dst_reg,
-				bpf_alu_string[BPF_OP(insn->code) >> 4],
+				is_sdiv_smod(insn) ? bpf_alu_sign_string[BPF_OP(insn->code) >> 4]
+						   : bpf_alu_string[BPF_OP(insn->code) >> 4],
+				is_movs(insn) ? bpf_movs_string[(insn->off >> 3) - 1] : "",
 				class == BPF_ALU ? 'w' : 'r',
 				insn->src_reg);
 		} else {
 			verbose(cbs->private_data, "(%02x) %c%d %s %d\n",
 				insn->code, class == BPF_ALU ? 'w' : 'r',
 				insn->dst_reg,
-				bpf_alu_string[BPF_OP(insn->code) >> 4],
+				is_sdiv_smod(insn) ? bpf_alu_sign_string[BPF_OP(insn->code) >> 4]
+						   : bpf_alu_string[BPF_OP(insn->code) >> 4],
 				insn->imm);
 		}
 	} else if (class == BPF_STX) {
@@ -218,13 +258,15 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 			verbose(cbs->private_data, "BUG_st_%02x\n", insn->code);
 		}
 	} else if (class == BPF_LDX) {
-		if (BPF_MODE(insn->code) != BPF_MEM) {
+		if (BPF_MODE(insn->code) != BPF_MEM && BPF_MODE(insn->code) != BPF_MEMS) {
 			verbose(cbs->private_data, "BUG_ldx_%02x\n", insn->code);
 			return;
 		}
 		verbose(cbs->private_data, "(%02x) r%d = *(%s *)(r%d %+d)\n",
 			insn->code, insn->dst_reg,
-			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			BPF_MODE(insn->code) == BPF_MEM ?
+				 bpf_ldst_string[BPF_SIZE(insn->code) >> 3] :
+				 bpf_lds_string[BPF_SIZE(insn->code) >> 3],
 			insn->src_reg, insn->off);
 	} else if (class == BPF_LD) {
 		if (BPF_MODE(insn->code) == BPF_ABS) {
@@ -279,6 +321,9 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 		} else if (insn->code == (BPF_JMP | BPF_JA)) {
 			verbose(cbs->private_data, "(%02x) goto pc%+d\n",
 				insn->code, insn->off);
+		} else if (insn->code == (BPF_JMP32 | BPF_JA)) {
+			verbose(cbs->private_data, "(%02x) gotol pc%+d\n",
+				insn->code, insn->imm);
 		} else if (insn->code == (BPF_JMP | BPF_EXIT)) {
 			verbose(cbs->private_data, "(%02x) exit\n", insn->code);
 		} else if (BPF_SRC(insn->code) == BPF_X) {
-- 
2.34.1



* [RFC PATCH bpf-next 08/13] selftests/bpf: Add unit tests for new sign-extension load insns
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (6 preceding siblings ...)
  2023-06-29  6:37 ` [RFC PATCH bpf-next 07/13] bpf: Add kernel/bpftool asm support for new instructions Yonghong Song
@ 2023-06-29  6:38 ` Yonghong Song
  2023-06-29  6:38 ` [RFC PATCH bpf-next 09/13] selftests/bpf: Add unit tests for new sign-extension mov insns Yonghong Song
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:38 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |  2 +
 .../selftests/bpf/progs/verifier_lds.c        | 46 +++++++++++++++++++
 2 files changed, 48 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_lds.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 070a13833c3f..bc05391174c7 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -31,6 +31,7 @@
 #include "verifier_int_ptr.skel.h"
 #include "verifier_jeq_infer_not_null.skel.h"
 #include "verifier_ld_ind.skel.h"
+#include "verifier_lds.skel.h"
 #include "verifier_leak_ptr.skel.h"
 #include "verifier_loops1.skel.h"
 #include "verifier_lwt.skel.h"
@@ -132,6 +133,7 @@ void test_verifier_helper_value_access(void)  { RUN(verifier_helper_value_access
 void test_verifier_int_ptr(void)              { RUN(verifier_int_ptr); }
 void test_verifier_jeq_infer_not_null(void)   { RUN(verifier_jeq_infer_not_null); }
 void test_verifier_ld_ind(void)               { RUN(verifier_ld_ind); }
+void test_verifier_lds(void)                  { RUN(verifier_lds); }
 void test_verifier_leak_ptr(void)             { RUN(verifier_leak_ptr); }
 void test_verifier_loops1(void)               { RUN(verifier_loops1); }
 void test_verifier_lwt(void)                  { RUN(verifier_lwt); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_lds.c b/tools/testing/selftests/bpf/progs/verifier_lds.c
new file mode 100644
index 000000000000..85f3030e0cfd
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_lds.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("socket")
+__description("LDS, S8")
+__success __success_unpriv __retval(-2)
+__naked void lds_s8(void)
+{
+	asm volatile ("					\
+	r1 = 0x3fe;					\
+	*(u64 *)(r10 - 8) = r1;				\
+	r0 = *(s8 *)(r10 - 8);				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("LDS, S16")
+__success __success_unpriv __retval(-2)
+__naked void lds_s16(void)
+{
+	asm volatile ("					\
+	r1 = 0x3fffe;					\
+	*(u64 *)(r10 - 8) = r1;				\
+	r0 = *(s16 *)(r10 - 8);				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("LDS, S32")
+__success __success_unpriv __retval(-2)
+__naked void lds_s32(void)
+{
+	asm volatile ("					\
+	r1 = 0xfffffffe;				\
+	*(u64 *)(r10 - 8) = r1;				\
+	r0 = *(s32 *)(r10 - 8);				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.34.1



* [RFC PATCH bpf-next 09/13] selftests/bpf: Add unit tests for new sign-extension mov insns
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (7 preceding siblings ...)
  2023-06-29  6:38 ` [RFC PATCH bpf-next 08/13] selftests/bpf: Add unit tests for new sign-extension load insns Yonghong Song
@ 2023-06-29  6:38 ` Yonghong Song
  2023-06-29  6:38 ` [RFC PATCH bpf-next 10/13] selftests/bpf: Add unit tests for new bswap insns Yonghong Song
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:38 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |  2 +
 .../selftests/bpf/progs/verifier_movs.c       | 67 +++++++++++++++++++
 2 files changed, 69 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_movs.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index bc05391174c7..a4d9fc57aff2 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -41,6 +41,7 @@
 #include "verifier_map_ret_val.skel.h"
 #include "verifier_masking.skel.h"
 #include "verifier_meta_access.skel.h"
+#include "verifier_movs.skel.h"
 #include "verifier_netfilter_ctx.skel.h"
 #include "verifier_netfilter_retcode.skel.h"
 #include "verifier_prevent_map_lookup.skel.h"
@@ -143,6 +144,7 @@ void test_verifier_map_ptr_mixing(void)       { RUN(verifier_map_ptr_mixing); }
 void test_verifier_map_ret_val(void)          { RUN(verifier_map_ret_val); }
 void test_verifier_masking(void)              { RUN(verifier_masking); }
 void test_verifier_meta_access(void)          { RUN(verifier_meta_access); }
+void test_verifier_movs(void)                 { RUN(verifier_movs); }
 void test_verifier_netfilter_ctx(void)        { RUN(verifier_netfilter_ctx); }
 void test_verifier_netfilter_retcode(void)    { RUN(verifier_netfilter_retcode); }
 void test_verifier_prevent_map_lookup(void)   { RUN(verifier_prevent_map_lookup); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_movs.c b/tools/testing/selftests/bpf/progs/verifier_movs.c
new file mode 100644
index 000000000000..575d2a708a01
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_movs.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("socket")
+__description("MOV32S, S8")
+__success __success_unpriv __retval(0x23)
+__naked void mov32s_s8(void)
+{
+	asm volatile ("					\
+	w0 = 0xff23;					\
+	w0 = (s8)w0;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("MOV32S, S16")
+__success __success_unpriv __retval(0xFFFFff23)
+__naked void mov32s_s16(void)
+{
+	asm volatile ("					\
+	w0 = 0xff23;					\
+	w0 = (s16)w0;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("MOV64S, S8")
+__success __success_unpriv __retval(-2)
+__naked void mov64s_s8(void)
+{
+	asm volatile ("					\
+	r0 = 0x1fe;					\
+	r0 = (s8)r0;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("MOV64S, S16")
+__success __success_unpriv __retval(0xf23)
+__naked void mov64s_s16(void)
+{
+	asm volatile ("					\
+	r0 = 0xf0f23;					\
+	r0 = (s16)r0;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("MOV64S, S32")
+__success __success_unpriv __retval(-2)
+__naked void mov64s_s32(void)
+{
+	asm volatile ("					\
+	r0 = 0xfffffffe;				\
+	r0 = (s32)r0;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.34.1



* [RFC PATCH bpf-next 10/13] selftests/bpf: Add unit tests for new bswap insns
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (8 preceding siblings ...)
  2023-06-29  6:38 ` [RFC PATCH bpf-next 09/13] selftests/bpf: Add unit tests for new sign-extension mov insns Yonghong Song
@ 2023-06-29  6:38 ` Yonghong Song
  2023-06-29  6:38 ` [RFC PATCH bpf-next 11/13] selftests/bpf: Add unit tests for new sdiv/smod insns Yonghong Song
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:38 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |  2 +
 .../selftests/bpf/progs/verifier_bswap.c      | 45 +++++++++++++++++++
 2 files changed, 47 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_bswap.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index a4d9fc57aff2..63112b3755cb 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -11,6 +11,7 @@
 #include "verifier_bounds_deduction_non_const.skel.h"
 #include "verifier_bounds_mix_sign_unsign.skel.h"
 #include "verifier_bpf_get_stack.skel.h"
+#include "verifier_bswap.skel.h"
 #include "verifier_btf_ctx_access.skel.h"
 #include "verifier_cfg.skel.h"
 #include "verifier_cgroup_inv_retcode.skel.h"
@@ -114,6 +115,7 @@ void test_verifier_bounds_deduction(void)     { RUN(verifier_bounds_deduction);
 void test_verifier_bounds_deduction_non_const(void)     { RUN(verifier_bounds_deduction_non_const); }
 void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); }
 void test_verifier_bpf_get_stack(void)        { RUN(verifier_bpf_get_stack); }
+void test_verifier_bswap(void)                { RUN(verifier_bswap); }
 void test_verifier_btf_ctx_access(void)       { RUN(verifier_btf_ctx_access); }
 void test_verifier_cfg(void)                  { RUN(verifier_cfg); }
 void test_verifier_cgroup_inv_retcode(void)   { RUN(verifier_cgroup_inv_retcode); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_bswap.c b/tools/testing/selftests/bpf/progs/verifier_bswap.c
new file mode 100644
index 000000000000..b85474a46296
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_bswap.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("socket")
+__description("BSWAP, 16")
+__success __success_unpriv __retval(0x23ff)
+__naked void bswap_16(void)
+{
+	asm volatile ("					\
+	r0 = 0xff23;					\
+	r0 = bswap16 r0;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("BSWAP, 32")
+__success __success_unpriv __retval(0x23ff0000)
+__naked void bswap_32(void)
+{
+	asm volatile ("					\
+	r0 = 0xff23;					\
+	r0 = bswap32 r0;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("BSWAP, 64")
+__success __success_unpriv __retval(0x34ff12ff)
+__naked void bswap_64(void)
+{
+	asm volatile ("					\
+	r0 = %[u64_val] ll;					\
+	r0 = bswap64 r0;				\
+	exit;						\
+"	:
+	: [u64_val]"i"(0xff12ff34ff56ff78ull)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.34.1



* [RFC PATCH bpf-next 11/13] selftests/bpf: Add unit tests for new sdiv/smod insns
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (9 preceding siblings ...)
  2023-06-29  6:38 ` [RFC PATCH bpf-next 10/13] selftests/bpf: Add unit tests for new bswap insns Yonghong Song
@ 2023-06-29  6:38 ` Yonghong Song
  2023-06-29  6:38 ` [RFC PATCH bpf-next 12/13] selftests/bpf: Add unit tests for new gotol insn Yonghong Song
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:38 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_sdiv.c       | 763 ++++++++++++++++++
 2 files changed, 765 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_sdiv.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 63112b3755cb..b2c041a4bc70 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -54,6 +54,7 @@
 #include "verifier_ringbuf.skel.h"
 #include "verifier_runtime_jit.skel.h"
 #include "verifier_scalar_ids.skel.h"
+#include "verifier_sdiv.skel.h"
 #include "verifier_search_pruning.skel.h"
 #include "verifier_sock.skel.h"
 #include "verifier_spill_fill.skel.h"
@@ -158,6 +159,7 @@ void test_verifier_regalloc(void)             { RUN(verifier_regalloc); }
 void test_verifier_ringbuf(void)              { RUN(verifier_ringbuf); }
 void test_verifier_runtime_jit(void)          { RUN(verifier_runtime_jit); }
 void test_verifier_scalar_ids(void)           { RUN(verifier_scalar_ids); }
+void test_verifier_sdiv(void)                 { RUN(verifier_sdiv); }
 void test_verifier_search_pruning(void)       { RUN(verifier_search_pruning); }
 void test_verifier_sock(void)                 { RUN(verifier_sock); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_sdiv.c b/tools/testing/selftests/bpf/progs/verifier_sdiv.c
new file mode 100644
index 000000000000..d92098d670ef
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_sdiv.c
@@ -0,0 +1,763 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("socket")
+__description("SDIV32, non-zero imm divisor, check 1")
+__success __success_unpriv __retval(-20)
+__naked void sdiv32_non_zero_imm_1(void)
+{
+	asm volatile ("					\
+	w0 = -41;					\
+	w0 s/= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero imm divisor, check 2")
+__success __success_unpriv __retval(-20)
+__naked void sdiv32_non_zero_imm_2(void)
+{
+	asm volatile ("					\
+	w0 = 41;					\
+	w0 s/= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero imm divisor, check 3")
+__success __success_unpriv __retval(20)
+__naked void sdiv32_non_zero_imm_3(void)
+{
+	asm volatile ("					\
+	w0 = -41;					\
+	w0 s/= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero imm divisor, check 4")
+__success __success_unpriv __retval(-21)
+__naked void sdiv32_non_zero_imm_4(void)
+{
+	asm volatile ("					\
+	w0 = -42;					\
+	w0 s/= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero imm divisor, check 5")
+__success __success_unpriv __retval(-21)
+__naked void sdiv32_non_zero_imm_5(void)
+{
+	asm volatile ("					\
+	w0 = 42;					\
+	w0 s/= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero imm divisor, check 6")
+__success __success_unpriv __retval(21)
+__naked void sdiv32_non_zero_imm_6(void)
+{
+	asm volatile ("					\
+	w0 = -42;					\
+	w0 s/= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero imm divisor, check 7")
+__success __success_unpriv __retval(21)
+__naked void sdiv32_non_zero_imm_7(void)
+{
+	asm volatile ("					\
+	w0 = 42;					\
+	w0 s/= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero imm divisor, check 8")
+__success __success_unpriv __retval(20)
+__naked void sdiv32_non_zero_imm_8(void)
+{
+	asm volatile ("					\
+	w0 = 41;					\
+	w0 s/= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero reg divisor, check 1")
+__success __success_unpriv __retval(-20)
+__naked void sdiv32_non_zero_reg_1(void)
+{
+	asm volatile ("					\
+	w0 = -41;					\
+	w1 = 2;						\
+	w0 s/= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero reg divisor, check 2")
+__success __success_unpriv __retval(-20)
+__naked void sdiv32_non_zero_reg_2(void)
+{
+	asm volatile ("					\
+	w0 = 41;					\
+	w1 = -2;					\
+	w0 s/= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero reg divisor, check 3")
+__success __success_unpriv __retval(20)
+__naked void sdiv32_non_zero_reg_3(void)
+{
+	asm volatile ("					\
+	w0 = -41;					\
+	w1 = -2;					\
+	w0 s/= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero reg divisor, check 4")
+__success __success_unpriv __retval(-21)
+__naked void sdiv32_non_zero_reg_4(void)
+{
+	asm volatile ("					\
+	w0 = -42;					\
+	w1 = 2;						\
+	w0 s/= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero reg divisor, check 5")
+__success __success_unpriv __retval(-21)
+__naked void sdiv32_non_zero_reg_5(void)
+{
+	asm volatile ("					\
+	w0 = 42;					\
+	w1 = -2;					\
+	w0 s/= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero reg divisor, check 6")
+__success __success_unpriv __retval(21)
+__naked void sdiv32_non_zero_reg_6(void)
+{
+	asm volatile ("					\
+	w0 = -42;					\
+	w1 = -2;					\
+	w0 s/= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero reg divisor, check 7")
+__success __success_unpriv __retval(21)
+__naked void sdiv32_non_zero_reg_7(void)
+{
+	asm volatile ("					\
+	w0 = 42;					\
+	w1 = 2;						\
+	w0 s/= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, non-zero reg divisor, check 8")
+__success __success_unpriv __retval(20)
+__naked void sdiv32_non_zero_reg_8(void)
+{
+	asm volatile ("					\
+	w0 = 41;					\
+	w1 = 2;						\
+	w0 s/= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero imm divisor, check 1")
+__success __success_unpriv __retval(-20)
+__naked void sdiv64_non_zero_imm_1(void)
+{
+	asm volatile ("					\
+	r0 = -41;					\
+	r0 s/= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero imm divisor, check 2")
+__success __success_unpriv __retval(-20)
+__naked void sdiv64_non_zero_imm_2(void)
+{
+	asm volatile ("					\
+	r0 = 41;					\
+	r0 s/= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero imm divisor, check 3")
+__success __success_unpriv __retval(20)
+__naked void sdiv64_non_zero_imm_3(void)
+{
+	asm volatile ("					\
+	r0 = -41;					\
+	r0 s/= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero imm divisor, check 4")
+__success __success_unpriv __retval(-21)
+__naked void sdiv64_non_zero_imm_4(void)
+{
+	asm volatile ("					\
+	r0 = -42;					\
+	r0 s/= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero imm divisor, check 5")
+__success __success_unpriv __retval(-21)
+__naked void sdiv64_non_zero_imm_5(void)
+{
+	asm volatile ("					\
+	r0 = 42;					\
+	r0 s/= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero imm divisor, check 6")
+__success __success_unpriv __retval(21)
+__naked void sdiv64_non_zero_imm_6(void)
+{
+	asm volatile ("					\
+	r0 = -42;					\
+	r0 s/= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero reg divisor, check 1")
+__success __success_unpriv __retval(-20)
+__naked void sdiv64_non_zero_reg_1(void)
+{
+	asm volatile ("					\
+	r0 = -41;					\
+	r1 = 2;						\
+	r0 s/= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero reg divisor, check 2")
+__success __success_unpriv __retval(-20)
+__naked void sdiv64_non_zero_reg_2(void)
+{
+	asm volatile ("					\
+	r0 = 41;					\
+	r1 = -2;					\
+	r0 s/= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero reg divisor, check 3")
+__success __success_unpriv __retval(20)
+__naked void sdiv64_non_zero_reg_3(void)
+{
+	asm volatile ("					\
+	r0 = -41;					\
+	r1 = -2;					\
+	r0 s/= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero reg divisor, check 4")
+__success __success_unpriv __retval(-21)
+__naked void sdiv64_non_zero_reg_4(void)
+{
+	asm volatile ("					\
+	r0 = -42;					\
+	r1 = 2;						\
+	r0 s/= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero reg divisor, check 5")
+__success __success_unpriv __retval(-21)
+__naked void sdiv64_non_zero_reg_5(void)
+{
+	asm volatile ("					\
+	r0 = 42;					\
+	r1 = -2;					\
+	r0 s/= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, non-zero reg divisor, check 6")
+__success __success_unpriv __retval(21)
+__naked void sdiv64_non_zero_reg_6(void)
+{
+	asm volatile ("					\
+	r0 = -42;					\
+	r1 = -2;					\
+	r0 s/= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero imm divisor, check 1")
+__success __success_unpriv __retval(-1)
+__naked void smod32_non_zero_imm_1(void)
+{
+	asm volatile ("					\
+	w0 = -41;					\
+	w0 s%%= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero imm divisor, check 2")
+__success __success_unpriv __retval(1)
+__naked void smod32_non_zero_imm_2(void)
+{
+	asm volatile ("					\
+	w0 = 41;					\
+	w0 s%%= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero imm divisor, check 3")
+__success __success_unpriv __retval(-1)
+__naked void smod32_non_zero_imm_3(void)
+{
+	asm volatile ("					\
+	w0 = -41;					\
+	w0 s%%= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero imm divisor, check 4")
+__success __success_unpriv __retval(0)
+__naked void smod32_non_zero_imm_4(void)
+{
+	asm volatile ("					\
+	w0 = -42;					\
+	w0 s%%= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero imm divisor, check 5")
+__success __success_unpriv __retval(0)
+__naked void smod32_non_zero_imm_5(void)
+{
+	asm volatile ("					\
+	w0 = 42;					\
+	w0 s%%= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero imm divisor, check 6")
+__success __success_unpriv __retval(0)
+__naked void smod32_non_zero_imm_6(void)
+{
+	asm volatile ("					\
+	w0 = -42;					\
+	w0 s%%= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero reg divisor, check 1")
+__success __success_unpriv __retval(-1)
+__naked void smod32_non_zero_reg_1(void)
+{
+	asm volatile ("					\
+	w0 = -41;					\
+	w1 = 2;						\
+	w0 s%%= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero reg divisor, check 2")
+__success __success_unpriv __retval(1)
+__naked void smod32_non_zero_reg_2(void)
+{
+	asm volatile ("					\
+	w0 = 41;					\
+	w1 = -2;					\
+	w0 s%%= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero reg divisor, check 3")
+__success __success_unpriv __retval(-1)
+__naked void smod32_non_zero_reg_3(void)
+{
+	asm volatile ("					\
+	w0 = -41;					\
+	w1 = -2;					\
+	w0 s%%= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero reg divisor, check 4")
+__success __success_unpriv __retval(0)
+__naked void smod32_non_zero_reg_4(void)
+{
+	asm volatile ("					\
+	w0 = -42;					\
+	w1 = 2;						\
+	w0 s%%= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero reg divisor, check 5")
+__success __success_unpriv __retval(0)
+__naked void smod32_non_zero_reg_5(void)
+{
+	asm volatile ("					\
+	w0 = 42;					\
+	w1 = -2;					\
+	w0 s%%= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, non-zero reg divisor, check 6")
+__success __success_unpriv __retval(0)
+__naked void smod32_non_zero_reg_6(void)
+{
+	asm volatile ("					\
+	w0 = -42;					\
+	w1 = -2;					\
+	w0 s%%= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero imm divisor, check 1")
+__success __success_unpriv __retval(-1)
+__naked void smod64_non_zero_imm_1(void)
+{
+	asm volatile ("					\
+	r0 = -41;					\
+	r0 s%%= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero imm divisor, check 2")
+__success __success_unpriv __retval(1)
+__naked void smod64_non_zero_imm_2(void)
+{
+	asm volatile ("					\
+	r0 = 41;					\
+	r0 s%%= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero imm divisor, check 3")
+__success __success_unpriv __retval(-1)
+__naked void smod64_non_zero_imm_3(void)
+{
+	asm volatile ("					\
+	r0 = -41;					\
+	r0 s%%= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero imm divisor, check 4")
+__success __success_unpriv __retval(0)
+__naked void smod64_non_zero_imm_4(void)
+{
+	asm volatile ("					\
+	r0 = -42;					\
+	r0 s%%= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero imm divisor, check 5")
+__success __success_unpriv __retval(0)
+__naked void smod64_non_zero_imm_5(void)
+{
+	asm volatile ("					\
+	r0 = 42;					\
+	r0 s%%= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero imm divisor, check 6")
+__success __success_unpriv __retval(0)
+__naked void smod64_non_zero_imm_6(void)
+{
+	asm volatile ("					\
+	r0 = -42;					\
+	r0 s%%= -2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero imm divisor, check 7")
+__success __success_unpriv __retval(0)
+__naked void smod64_non_zero_imm_7(void)
+{
+	asm volatile ("					\
+	r0 = 42;					\
+	r0 s%%= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero imm divisor, check 8")
+__success __success_unpriv __retval(1)
+__naked void smod64_non_zero_imm_8(void)
+{
+	asm volatile ("					\
+	r0 = 41;					\
+	r0 s%%= 2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero reg divisor, check 1")
+__success __success_unpriv __retval(-1)
+__naked void smod64_non_zero_reg_1(void)
+{
+	asm volatile ("					\
+	r0 = -41;					\
+	r1 = 2;						\
+	r0 s%%= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero reg divisor, check 2")
+__success __success_unpriv __retval(1)
+__naked void smod64_non_zero_reg_2(void)
+{
+	asm volatile ("					\
+	r0 = 41;					\
+	r1 = -2;					\
+	r0 s%%= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero reg divisor, check 3")
+__success __success_unpriv __retval(-1)
+__naked void smod64_non_zero_reg_3(void)
+{
+	asm volatile ("					\
+	r0 = -41;					\
+	r1 = -2;					\
+	r0 s%%= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero reg divisor, check 4")
+__success __success_unpriv __retval(0)
+__naked void smod64_non_zero_reg_4(void)
+{
+	asm volatile ("					\
+	r0 = -42;					\
+	r1 = 2;						\
+	r0 s%%= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero reg divisor, check 5")
+__success __success_unpriv __retval(0)
+__naked void smod64_non_zero_reg_5(void)
+{
+	asm volatile ("					\
+	r0 = 42;					\
+	r1 = -2;					\
+	r0 s%%= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero reg divisor, check 6")
+__success __success_unpriv __retval(0)
+__naked void smod64_non_zero_reg_6(void)
+{
+	asm volatile ("					\
+	r0 = -42;					\
+	r1 = -2;					\
+	r0 s%%= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero reg divisor, check 7")
+__success __success_unpriv __retval(0)
+__naked void smod64_non_zero_reg_7(void)
+{
+	asm volatile ("					\
+	r0 = 42;					\
+	r1 = 2;						\
+	r0 s%%= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, non-zero reg divisor, check 8")
+__success __success_unpriv __retval(1)
+__naked void smod64_non_zero_reg_8(void)
+{
+	asm volatile ("					\
+	r0 = 41;					\
+	r1 = 2;						\
+	r0 s%%= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV32, zero divisor")
+__success __success_unpriv __retval(42)
+__naked void sdiv32_zero_divisor(void)
+{
+	asm volatile ("					\
+	w0 = 42;					\
+	w1 = 0;						\
+	w2 = -1;					\
+	w2 s/= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SDIV64, zero divisor")
+__success __success_unpriv __retval(42)
+__naked void sdiv64_zero_divisor(void)
+{
+	asm volatile ("					\
+	r0 = 42;					\
+	r1 = 0;						\
+	r2 = -1;					\
+	r2 s/= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD32, zero divisor")
+__success __success_unpriv __retval(42)
+__naked void smod32_zero_divisor(void)
+{
+	asm volatile ("					\
+	w0 = 42;					\
+	w1 = 0;						\
+	w2 = -1;					\
+	w2 s%%= w1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("SMOD64, zero divisor")
+__success __success_unpriv __retval(42)
+__naked void smod64_zero_divisor(void)
+{
+	asm volatile ("					\
+	r0 = 42;					\
+	r1 = 0;						\
+	r2 = -1;					\
+	r2 s%%= r1;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH bpf-next 12/13] selftests/bpf: Add unit tests for new gotol insn
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (10 preceding siblings ...)
  2023-06-29  6:38 ` [RFC PATCH bpf-next 11/13] selftests/bpf: Add unit tests for new sdiv/smod insns Yonghong Song
@ 2023-06-29  6:38 ` Yonghong Song
  2023-06-29  6:38 ` [RFC PATCH bpf-next 13/13] selftests/bpf: Add a cpuv4 test runner for cpu=v4 testing Yonghong Song
       [not found] ` <PH7PR21MB38786422B9929D253E279810A325A@PH7PR21MB3878.namprd21.prod.outlook.com>
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:38 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |  2 ++
 .../selftests/bpf/progs/verifier_gotol.c      | 30 +++++++++++++++++++
 2 files changed, 32 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_gotol.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index b2c041a4bc70..d07251560ce4 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -25,6 +25,7 @@
 #include "verifier_direct_stack_access_wraparound.skel.h"
 #include "verifier_div0.skel.h"
 #include "verifier_div_overflow.skel.h"
+#include "verifier_gotol.skel.h"
 #include "verifier_helper_access_var_len.skel.h"
 #include "verifier_helper_packet_access.skel.h"
 #include "verifier_helper_restricted.skel.h"
@@ -130,6 +131,7 @@ void test_verifier_direct_packet_access(void) { RUN(verifier_direct_packet_acces
 void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); }
 void test_verifier_div0(void)                 { RUN(verifier_div0); }
 void test_verifier_div_overflow(void)         { RUN(verifier_div_overflow); }
+void test_verifier_gotol(void)                { RUN(verifier_gotol); }
 void test_verifier_helper_access_var_len(void) { RUN(verifier_helper_access_var_len); }
 void test_verifier_helper_packet_access(void) { RUN(verifier_helper_packet_access); }
 void test_verifier_helper_restricted(void)    { RUN(verifier_helper_restricted); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_gotol.c b/tools/testing/selftests/bpf/progs/verifier_gotol.c
new file mode 100644
index 000000000000..78870ea4d468
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_gotol.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("socket")
+__description("gotol, small_imm")
+__success __success_unpriv __retval(1)
+__naked void gotol_small_imm(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 == 0 goto l0_%=;				\
+	gotol l1_%=;					\
+l2_%=:							\
+	gotol l3_%=;					\
+l1_%=:							\
+	r0 = 1;						\
+	gotol l2_%=;					\
+l0_%=:							\
+	r0 = 2;						\
+l3_%=:							\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH bpf-next 13/13] selftests/bpf: Add a cpuv4 test runner for cpu=v4 testing
  2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
                   ` (11 preceding siblings ...)
  2023-06-29  6:38 ` [RFC PATCH bpf-next 12/13] selftests/bpf: Add unit tests for new gotol insn Yonghong Song
@ 2023-06-29  6:38 ` Yonghong Song
       [not found] ` <PH7PR21MB38786422B9929D253E279810A325A@PH7PR21MB3878.namprd21.prod.outlook.com>
  13 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-06-29  6:38 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

Similar to the no-alu32 runner, a cpuv4 runner is created to test
BPF programs compiled with -mcpu=v4.

The following are instruction-count statistics for each of the new
instructions, excluding the naked asm (verifier_*) tests (an
illustrative C sketch follows the list):
   insn pattern                # of instructions
   reg = (s8)reg               4
   reg = (s16)reg              2
   reg = (s32)reg              26
   reg = *(s8 *)(reg + off)    11
   reg = *(s16 *)(reg + off)   14
   reg = *(s32 *)(reg + off)   15214
   reg = bswap16 reg           133
   reg = bswap32 reg           38
   reg = bswap64 reg           14
   reg s/= reg                 0
   reg s%= reg                 0
   gotol <offset>              58
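
The kind of C source that can produce these insn patterns under
-mcpu=v4 looks like the sketch below (a minimal illustrative sketch;
the struct and function names are made up and not part of this patch
set):

  struct pkt_meta {                    /* hypothetical layout */
          signed char  b;
          short        h;
          int          w;
  };

  long use_new_insns(struct pkt_meta *m, unsigned int x)
  {
          long v;

          v  = m->w;                   /* reg = *(s32 *)(reg + off) */
          v += (long)(signed char)x;   /* reg = (s8)reg */
          v += __builtin_bswap32(x);   /* reg = bswap32 reg */
          return v;
  }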

Note that in the llvm -mcpu=v4 implementation, the compiler is somewhat
conservative about generating the 'gotol' insn (32-bit branch offset)
since it does not count the number of insns precisely (e.g., some insns
are debug insns, etc.). The newer 'gotol' insn should produce
verification states comparable to the old 'goto' insn.

I have not collected verifier stats yet since I have not yet started
on proper range/bound estimation for these instructions.

With the current patch set, all selftests pass with -mcpu=v4 when
running the test_progs-cpuv4 binary.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/testing/selftests/bpf/Makefile | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 538df8fb8c42..5f901cb0df59 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -40,7 +40,7 @@ TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test
 	test_sock test_sockmap get_cgroup_id_user \
 	test_cgroup_storage \
 	test_tcpnotify_user test_sysctl \
-	test_progs-no_alu32
+	test_progs-no_alu32 test_progs-cpuv4
 
 # Also test bpf-gcc, if present
 ifneq ($(BPF_GCC),)
@@ -379,6 +379,11 @@ define CLANG_NOALU32_BPF_BUILD_RULE
 	$(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
 	$(Q)$(CLANG) $3 -O2 -target bpf -c $1 -mcpu=v2 -o $2
 endef
+# Similar to CLANG_BPF_BUILD_RULE, but with cpu-v4
+define CLANG_CPUV4_BPF_BUILD_RULE
+	$(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
+	$(Q)$(CLANG) $3 -O2 -target bpf -c $1 -mcpu=v4 -o $2
+endef
 # Build BPF object using GCC
 define GCC_BPF_BUILD_RULE
 	$(call msg,GCC-BPF,$(TRUNNER_BINARY),$2)
@@ -421,7 +426,7 @@ LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(ske
 # $eval()) and pass control to DEFINE_TEST_RUNNER_RULES.
 # Parameters:
 # $1 - test runner base binary name (e.g., test_progs)
-# $2 - test runner extra "flavor" (e.g., no_alu32, gcc-bpf, etc)
+# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, gcc-bpf, etc)
 define DEFINE_TEST_RUNNER
 
 TRUNNER_OUTPUT := $(OUTPUT)$(if $2,/)$2
@@ -449,7 +454,7 @@ endef
 # Using TRUNNER_XXX variables, provided by callers of DEFINE_TEST_RUNNER and
 # set up by DEFINE_TEST_RUNNER itself, create test runner build rules with:
 # $1 - test runner base binary name (e.g., test_progs)
-# $2 - test runner extra "flavor" (e.g., no_alu32, gcc-bpf, etc)
+# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, gcc-bpf, etc)
 define DEFINE_TEST_RUNNER_RULES
 
 ifeq ($($(TRUNNER_OUTPUT)-dir),)
@@ -580,6 +585,11 @@ TRUNNER_BPF_BUILD_RULE := CLANG_NOALU32_BPF_BUILD_RULE
 TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
 $(eval $(call DEFINE_TEST_RUNNER,test_progs,no_alu32))
 
+# Define test_progs-cpuv4 test runner.
+TRUNNER_BPF_BUILD_RULE := CLANG_CPUV4_BPF_BUILD_RULE
+TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
+$(eval $(call DEFINE_TEST_RUNNER,test_progs,cpuv4))
+
 # Define test_progs BPF-GCC-flavored test runner.
 ifneq ($(BPF_GCC),)
 TRUNNER_BPF_BUILD_RULE := GCC_BPF_BUILD_RULE
@@ -674,7 +684,7 @@ EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR)	\
 	prog_tests/tests.h map_tests/tests.h verifier/tests.h		\
 	feature bpftool							\
 	$(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h *.subskel.h	\
-			       no_alu32 bpf_gcc bpf_testmod.ko		\
+			       no_alu32 cpuv4 bpf_gcc bpf_testmod.ko	\
 			       liburandom_read.so)
 
 .PHONY: docs docs-clean
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4
       [not found] ` <PH7PR21MB38786422B9929D253E279810A325A@PH7PR21MB3878.namprd21.prod.outlook.com>
@ 2023-06-29 14:17   ` Yonghong Song
  2023-07-03 21:11     ` Daniel Xu
  0 siblings, 1 reply; 19+ messages in thread
From: Yonghong Song @ 2023-06-29 14:17 UTC (permalink / raw)
  To: Dave Thaler, Yonghong Song, bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team



On 6/29/23 6:42 AM, Dave Thaler wrote:
> Yonghong Song <yhs@fb.com> wrote:
>> In previous discussion ([1]), it is agreed that we should introduce cpu version
>> 4 (llvm flag -mcpu=v4) which contains some instructions which can simplify
>> code, make code easier to understand, fix the existing problem, or simply for
>> feature completeness. More specifically, the following new insns are
>> proposed:
> [...]
> 
> What about also updating instruction-set.rst?

Will update doc in the next revision.

> 
> Dave
>   
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH bpf-next 01/13] bpf: Support new sign-extension load insns
  2023-06-29  6:37 ` [RFC PATCH bpf-next 01/13] bpf: Support new sign-extension load insns Yonghong Song
@ 2023-07-03  0:53   ` Alexei Starovoitov
  2023-07-03 15:29     ` Yonghong Song
  0 siblings, 1 reply; 19+ messages in thread
From: Alexei Starovoitov @ 2023-07-03  0:53 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team

On Wed, Jun 28, 2023 at 11:37:21PM -0700, Yonghong Song wrote:
>  
> +/* LDX: dst_reg = *(s8*)(src_reg + off) */
> +static void emit_lds(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
> +{
...
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 60a9d59beeab..b28109bc5c54 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -19,6 +19,7 @@
>  
>  /* ld/ldx fields */
>  #define BPF_DW		0x18	/* double word (64-bit) */
> +#define BPF_MEMS	0x80	/* load with sign extension */

The Intel assembly instruction for a sign-extending mov is called 'movsx'.
Let's adopt the SX suffix here and in other patches?

s/BPF_MEMS/BPF_MEMSX/ here.
s/emit_lds/emit_ldsx/ above.

s/emit_movs_reg/emit_movsx_reg/ in patch 3.

s/bpf_movs_string/bpf_movsx_string/ in patch 7
s/bpf_lds_string/bpf_ldsx_string/ in patch 7.
s/is_movs/is_movsx/ in patch 7.

sdiv/smod can stay as-is.

Naming is hard, of course.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH bpf-next 01/13] bpf: Support new sign-extension load insns
  2023-07-03  0:53   ` Alexei Starovoitov
@ 2023-07-03 15:29     ` Yonghong Song
  0 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-07-03 15:29 UTC (permalink / raw)
  To: Alexei Starovoitov, Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Fangrui Song, kernel-team



On 7/2/23 5:53 PM, Alexei Starovoitov wrote:
> On Wed, Jun 28, 2023 at 11:37:21PM -0700, Yonghong Song wrote:
>>   
>> +/* LDX: dst_reg = *(s8*)(src_reg + off) */
>> +static void emit_lds(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
>> +{
> ...
>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>> index 60a9d59beeab..b28109bc5c54 100644
>> --- a/include/uapi/linux/bpf.h
>> +++ b/include/uapi/linux/bpf.h
>> @@ -19,6 +19,7 @@
>>   
>>   /* ld/ldx fields */
>>   #define BPF_DW		0x18	/* double word (64-bit) */
>> +#define BPF_MEMS	0x80	/* load with sign extension */
> 
> The Intel assembly instruction for a sign-extending mov is called 'movsx'.
> Let's adopt the SX suffix here and in other patches?
> 
> s/BPF_MEMS/BPF_MEMSX/ here.
> s/emit_lds/emit_ldsx/ above.
> 
> s/emit_movs_reg/emit_movsx_reg/ in patch 3.
> 
> s/bpf_movs_string/bpf_movsx_string/ in patch 7
> s/bpf_lds_string/bpf_ldsx_string/ in patch 7.
> s/is_movs/is_movsx/ in patch 7.
> 
> sdiv/smod can stay as-is.

Sounds good to me! Will use the above suggested names
in the next revision.

> 
> Naming is hard, of course.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4
  2023-06-29 14:17   ` [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
@ 2023-07-03 21:11     ` Daniel Xu
  2023-07-03 23:36       ` Yonghong Song
  0 siblings, 1 reply; 19+ messages in thread
From: Daniel Xu @ 2023-07-03 21:11 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Dave Thaler, Yonghong Song, bpf, Alexei Starovoitov,
	Andrii Nakryiko, Daniel Borkmann, Fangrui Song, kernel-team

Hi Yonghong,

On Thu, Jun 29, 2023 at 07:17:56AM -0700, Yonghong Song wrote:
> 
> 
> On 6/29/23 6:42 AM, Dave Thaler wrote:
> > Yonghong Song <yhs@fb.com> wrote:
> > > In previous discussion ([1]), it is agreed that we should introduce cpu version
> > > 4 (llvm flag -mcpu=v4) which contains some instructions which can simplify
> > > code, make code easier to understand, fix the existing problem, or simply for
> > > feature completeness. More specifically, the following new insns are
> > > proposed:
> > [...]
> > 
> > What about also updating instruction-set.rst?
> 
> Will update doc in the next revision.
> 
> > 
> > Dave
> > 
> 

I think bpf_design_QA.rst needs to be updated as well:

    Q: Why there is no BPF_SDIV for signed divide operation?
    [..]

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4
  2023-07-03 21:11     ` Daniel Xu
@ 2023-07-03 23:36       ` Yonghong Song
  0 siblings, 0 replies; 19+ messages in thread
From: Yonghong Song @ 2023-07-03 23:36 UTC (permalink / raw)
  To: Daniel Xu
  Cc: Dave Thaler, Yonghong Song, bpf, Alexei Starovoitov,
	Andrii Nakryiko, Daniel Borkmann, Fangrui Song, kernel-team



On 7/3/23 2:11 PM, Daniel Xu wrote:
> Hi Yonghong,
> 
> On Thu, Jun 29, 2023 at 07:17:56AM -0700, Yonghong Song wrote:
>>
>>
>> On 6/29/23 6:42 AM, Dave Thaler wrote:
>>> Yonghong Song <yhs@fb.com> wrote:
>>>> In previous discussion ([1]), it is agreed that we should introduce cpu version
>>>> 4 (llvm flag -mcpu=v4) which contains some instructions which can simplify
>>>> code, make code easier to understand, fix the existing problem, or simply for
>>>> feature completeness. More specifically, the following new insns are
>>>> proposed:
>>> [...]
>>>
>>> What about also updating instruction-set.rst?
>>
>> Will update doc in the next revision.
>>
>>>
>>> Dave
>>>
>>
> 
> I think bpf_design_QA.rst needs to be updated as well:
> 
>      Q: Why there is no BPF_SDIV for signed divide operation?
>      [..]

Sure. Will do!

> 
> Thanks,
> Daniel

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2023-07-03 23:37 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-29  6:37 [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
2023-06-29  6:37 ` [RFC PATCH bpf-next 01/13] bpf: Support new sign-extension load insns Yonghong Song
2023-07-03  0:53   ` Alexei Starovoitov
2023-07-03 15:29     ` Yonghong Song
2023-06-29  6:37 ` [RFC PATCH bpf-next 02/13] bpf: Add verifier support for " Yonghong Song
2023-06-29  6:37 ` [RFC PATCH bpf-next 03/13] bpf: Support new sign-extension mov insns Yonghong Song
2023-06-29  6:37 ` [RFC PATCH bpf-next 04/13] bpf: Support new unconditional bswap instruction Yonghong Song
2023-06-29  6:37 ` [RFC PATCH bpf-next 05/13] bpf: Support new signed div/mod instructions Yonghong Song
2023-06-29  6:37 ` [RFC PATCH bpf-next 06/13] bpf: Support new 32bit offset jmp instruction Yonghong Song
2023-06-29  6:37 ` [RFC PATCH bpf-next 07/13] bpf: Add kernel/bpftool asm support for new instructions Yonghong Song
2023-06-29  6:38 ` [RFC PATCH bpf-next 08/13] selftests/bpf: Add unit tests for new sign-extension load insns Yonghong Song
2023-06-29  6:38 ` [RFC PATCH bpf-next 09/13] selftests/bpf: Add unit tests for new sign-extension mov insns Yonghong Song
2023-06-29  6:38 ` [RFC PATCH bpf-next 10/13] selftests/bpf: Add unit tests for new bswap insns Yonghong Song
2023-06-29  6:38 ` [RFC PATCH bpf-next 11/13] selftests/bpf: Add unit tests for new sdiv/smod insns Yonghong Song
2023-06-29  6:38 ` [RFC PATCH bpf-next 12/13] selftests/bpf: Add unit tests for new gotol insn Yonghong Song
2023-06-29  6:38 ` [RFC PATCH bpf-next 13/13] selftests/bpf: Add a cpuv4 test runner for cpu=v4 testing Yonghong Song
     [not found] ` <PH7PR21MB38786422B9929D253E279810A325A@PH7PR21MB3878.namprd21.prod.outlook.com>
2023-06-29 14:17   ` [RFC PATCH bpf-next 00/13] bpf: Support new insns from cpu v4 Yonghong Song
2023-07-03 21:11     ` Daniel Xu
2023-07-03 23:36       ` Yonghong Song

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).