bpf.vger.kernel.org archive mirror
* [PATCH bpf-next v2 0/3] bpf: Add LDX/STX/ST sanitize in jited BPF progs
@ 2022-11-25  6:36 Hao Sun
  2022-11-25  6:36 ` [PATCH bpf-next v2 1/3] bpf: Sanitize STX/ST in jited BPF progs with KASAN Hao Sun
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Hao Sun @ 2022-11-25  6:36 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
	kpsingh, sdf, haoluo, jolsa, davem, linux-kernel, Hao Sun

The verifier sometimes makes mistakes[1][2] that can be exploited to
achieve arbitrary read/write. Currently, syzbot continuously tests bpf
and can find memory issues in the bpf syscalls, but it can hardly find
mischecking bugs in the verifier. We need runtime checks like KASAN
inside BPF programs for this. This patch series implements address
sanitizing in jited BPF progs for testing purposes, so that tools like
syzbot can automatically find interesting bugs in the verifier by, if
possible, generating and executing BPF programs that pass the verifier
but contain memory issues, thereby triggering the sanitizer.

The idea is to dispatch the read/write addresses of a BPF program to
kernel functions that are instrumented by KASAN, achieving indirect
checking. Indirect checking is adopted because it is much simpler;
emitting direct checks the way compilers do would make the JIT much
more complex. The main steps, performed during do_misc_fixups(), are:
back up R0 and R1, store the address to check in R1, and insert a call
to the checking function before each load/store insn. The stack size
of BPF progs is extended by 64 bytes in this mode to back up R1~R5,
so the checking funcs cannot corrupt register state. An extra Kconfig
option gates this, so normal use cases are not impacted at all.
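
For illustration, here is a rough sketch (not taken verbatim from the
patches) of the sequence emitted around a 64-bit store such as
"*(u64 *)(r3 + 8) = r5", where dst_reg (R3) is neither R0 nor R1; the
register numbers and the offset are only an example:

	BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1),          /* AX  <- R1 (save R1)        */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_3),           /* R1  <- dst_reg (addr base) */
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),           /* dst <- R0 (save R0)        */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),          /* R1  <- addr to check       */
	/* back up R1~R5 to the extended stack area */
	BPF_EMIT_CALL(bpf_asan_store64),               /* KASAN-instrumented access  */
	/* restore R1~R5 from the extended stack area */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),         /* undo the offset            */
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_3),           /* restore R0                 */
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_1),           /* restore dst_reg            */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX),          /* restore R1                 */
	BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_5, 8),  /* the original store         */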

Also, not all ldx/stx/st insns are instrumented. Insns rewritten by
other fixup or conversion passes that use BPF_REG_AX are skipped,
because they conflict with our instrumentation; insns whose access
address is based on R10 are also skipped because they are trivial to
verify.

Patch 1 sanitizes st/stx insns, Patch 2 sanitizes ldx insns, and
Patch 3 adds selftests covering the instrumentation in each possible
case; all new and existing verifier selftests pass. Also, a BPF prog
that exploits CVE-2022-23222 to achieve an OOB read is provided[3];
it is reliably caught by this patch series.

[1] http://bit.do/CVE-2021-3490
[2] http://bit.do/CVE-2022-23222
[3] OOB-read: https://pastebin.com/raw/Ee1Cw492

v1 -> v2:
	removed the changes to the JIT completely; back up regs to the extended stack instead.

Hao Sun (3):
  bpf: Sanitize STX/ST in jited BPF progs with KASAN
  bpf: Sanitize LDX in jited BPF progs with KASAN
  selftests/bpf: Add tests for LDX/STX/ST sanitize

 kernel/bpf/Kconfig                            |  13 +
 kernel/bpf/verifier.c                         | 224 +++++++++++
 .../selftests/bpf/verifier/sanitize_st_ldx.c  | 362 ++++++++++++++++++
 3 files changed, 599 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c


base-commit: 2b3e8f6f5b939ceeb2e097339bf78ebaaf11dfe9
-- 
2.38.1



* [PATCH bpf-next v2 1/3] bpf: Sanitize STX/ST in jited BPF progs with KASAN
  2022-11-25  6:36 [PATCH bpf-next v2 0/3] bpf: Add LDX/STX/ST sanitize in jited BPF progs Hao Sun
@ 2022-11-25  6:36 ` Hao Sun
  2022-11-25  6:36 ` [PATCH bpf-next v2 2/3] bpf: Sanitize LDX " Hao Sun
  2022-11-25  6:36 ` [PATCH bpf-next v2 3/3] selftests/bpf: Add tests for LDX/STX/ST sanitize Hao Sun
  2 siblings, 0 replies; 4+ messages in thread
From: Hao Sun @ 2022-11-25  6:36 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
	kpsingh, sdf, haoluo, jolsa, davem, linux-kernel, Hao Sun

Make the verifier sanitize STX/ST insns in jited BPF programs by
dispatching the accessed address to kernel functions that are
instrumented by KASAN.

Only STX/ST insns that are not part of patches added by other passes
using REG_AX, and whose dst_reg is not R10, are sanitized. The former
conflict with our instrumentation; the latter are trivial for the
verifier to check, so both are skipped to reduce the footprint.

The instrumentation is performed in do_misc_fixups(). During it, R0
and R1 are backed up or exchanged with dst_reg, and the address to
check is stored into R1. We extend the stack size to back up all the
scratch regs, because we do not rely on the verifier's knowledge of
the calculated stack size or the liveness of each reg. The
corresponding bpf_asan_storeN() call is then inserted before the
store. The sanitize functions are instrumented with KASAN and simply
write the given number of bytes to the target address; KASAN performs
the actual checking. An extra Kconfig option gates this, so normal
use cases are not impacted at all.
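
To make the backup slots concrete, here is a minimal sketch of what
BACKUP_SCRATCH_REGS expands to, assuming MAX_BPF_STACK is 512 (the
concrete offsets are only illustrative):

	/* R1~R5 are spilled to fixed slots just below the normal stack: */
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -520),  /* fp - (512 + 8)  */
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -528),  /* fp - (512 + 16) */
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_3, -536),  /* fp - (512 + 24) */
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -544),  /* fp - (512 + 32) */
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_5, -552),  /* fp - (512 + 40) */

RESTORE_SCRATCH_REGS emits the matching BPF_LDX_MEM loads, and
prog->aux->stack_depth is bumped to MAX_BPF_STACK + 64 so the JIT
reserves the extra space.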

Signed-off-by: Hao Sun <sunhao.th@gmail.com>
---
 kernel/bpf/Kconfig    |  13 ++++
 kernel/bpf/verifier.c | 134 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 147 insertions(+)

diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index 2dfe1079f772..d41e1d2d26f1 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -99,4 +99,17 @@ config BPF_LSM
 
 	  If you are unsure how to answer this question, answer N.
 
+config BPF_PROG_KASAN
+	bool "Enable BPF program address sanitizing"
+	depends on BPF_JIT_ALWAYS_ON
+	depends on KASAN
+	help
+	  Enables instrumentation of LDX/STX/ST insns to capture memory
+	  access errors in BPF programs that the verifier has missed.
+
+	  The actual check is performed by KASAN; this feature adds some
+	  overhead and should mainly be used for testing purposes.
+
+	  If you are unsure how to answer this question, answer N.
+
 endmenu # "BPF subsystem"
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6599d25dae38..5519c24c5bd4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15327,6 +15327,25 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
+#ifdef CONFIG_BPF_PROG_KASAN
+
+/* These functions are instrumented with KASAN and do the actual sanitizing. */
+
+#define BPF_ASAN_STORE(n)                         \
+	notrace u64 bpf_asan_store##n(u##n *addr) \
+	{                                         \
+		u##n ret = *addr;                 \
+		*addr = ret;                      \
+		return ret;                       \
+	}
+
+BPF_ASAN_STORE(8);
+BPF_ASAN_STORE(16);
+BPF_ASAN_STORE(32);
+BPF_ASAN_STORE(64);
+
+#endif
+
 /* Do various post-verification rewrites in a single program pass.
  * These rewrites simplify JIT and interpreter implementations.
  */
@@ -15340,7 +15359,12 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 	const int insn_cnt = prog->len;
 	const struct bpf_map_ops *ops;
 	struct bpf_insn_aux_data *aux;
+#ifndef CONFIG_BPF_PROG_KASAN
 	struct bpf_insn insn_buf[16];
+#else
+	struct bpf_insn insn_buf[32];
+	bool in_patch_use_ax = false;
+#endif
 	struct bpf_prog *new_prog;
 	struct bpf_map *map_ptr;
 	int i, ret, cnt, delta = 0;
@@ -15460,6 +15484,112 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
+#ifdef CONFIG_BPF_PROG_KASAN
+
+/* With CONFIG_BPF_PROG_KASAN, we extend the prog stack to MAX_BPF_STACK + 64
+ * to back up the scratch regs before calling the sanitize functions, because
+ * we don't rely on the verifier's knowledge about the calculated stack size
+ * or the liveness of each reg.
+ */
+#define __BACKUP_REG(n) \
+	*patch++ = BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_##n, -(MAX_BPF_STACK + 8 * n))
+#define BACKUP_SCRATCH_REGS	\
+	__BACKUP_REG(1);	\
+	__BACKUP_REG(2);	\
+	__BACKUP_REG(3);	\
+	__BACKUP_REG(4);	\
+	__BACKUP_REG(5)
+
+#define __RESTORE_REG(n) \
+	*patch++ = BPF_LDX_MEM(BPF_DW, BPF_REG_##n, BPF_REG_10, -(MAX_BPF_STACK + 8 * n))
+#define RESTORE_SCRATCH_REGS	\
+	__RESTORE_REG(1);	\
+	__RESTORE_REG(2);	\
+	__RESTORE_REG(3);	\
+	__RESTORE_REG(4);	\
+	__RESTORE_REG(5)
+
+		/* Patches that use REG_AX conflict with us, so skip them.
+		 * This starts with the first use of REG_AX and stops only when
+		 * we see the next ldx/stx/st insn with valid aux information.
+		 */
+		aux = &env->insn_aux_data[i + delta];
+		if (in_patch_use_ax && (int)aux->ptr_type != 0)
+			in_patch_use_ax = false;
+		if (insn->dst_reg == BPF_REG_AX || insn->src_reg == BPF_REG_AX)
+			in_patch_use_ax = true;
+
+		/* Sanitize ST/STX operation. */
+		if (BPF_CLASS(insn->code) == BPF_ST ||
+		    BPF_CLASS(insn->code) == BPF_STX) {
+			struct bpf_insn sanitize_fn;
+			struct bpf_insn *patch = &insn_buf[0];
+
+			/* Skip st/stx to R10, they're trivial to check. */
+			if (in_patch_use_ax || insn->dst_reg == BPF_REG_10 ||
+				BPF_MODE(insn->code) == BPF_NOSPEC)
+				continue;
+
+			switch (BPF_SIZE(insn->code)) {
+			case BPF_B:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store8);
+				break;
+			case BPF_H:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store16);
+				break;
+			case BPF_W:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store32);
+				break;
+			case BPF_DW:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store64);
+				break;
+			}
+
+			/* Back up R0 and R1, store `dst + off` in R1, invoke the
+			 * sanitize fn, and then restore each reg.
+			 */
+			if (insn->dst_reg == BPF_REG_1) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_0);
+			} else if (insn->dst_reg == BPF_REG_0) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_0);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, insn->dst_reg);
+				*patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_0);
+			}
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, insn->off);
+			BACKUP_SCRATCH_REGS;
+			/* Call sanitize fn; R1~R5 are saved to stack during jit. */
+			*patch++ = sanitize_fn;
+			RESTORE_SCRATCH_REGS;
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -insn->off);
+			if (insn->dst_reg == BPF_REG_1) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_AX);
+			} else if (insn->dst_reg == BPF_REG_0) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, insn->dst_reg);
+				*patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			}
+			*patch++ = *insn;
+			cnt = patch - insn_buf;
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			continue;
+		}
+#endif
+
 		if (insn->code != (BPF_JMP | BPF_CALL))
 			continue;
 		if (insn->src_reg == BPF_PSEUDO_CALL)
@@ -15852,6 +15982,10 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		}
 	}
 
+#ifdef CONFIG_BPF_PROG_KASAN
+	prog->aux->stack_depth = MAX_BPF_STACK + 64;
+#endif
+
 	sort_kfunc_descs_by_imm(env->prog);
 
 	return 0;
-- 
2.38.1



* [PATCH bpf-next v2 2/3] bpf: Sanitize LDX in jited BPF progs with KASAN
  2022-11-25  6:36 [PATCH bpf-next v2 0/3] bpf: Add LDX/STX/ST sanitize in jited BPF progs Hao Sun
  2022-11-25  6:36 ` [PATCH bpf-next v2 1/3] bpf: Sanitize STX/ST in jited BPF progs with KASAN Hao Sun
@ 2022-11-25  6:36 ` Hao Sun
  2022-11-25  6:36 ` [PATCH bpf-next v2 3/3] selftests/bpf: Add tests for LDX/STX/ST sanitize Hao Sun
  2 siblings, 0 replies; 4+ messages in thread
From: Hao Sun @ 2022-11-25  6:36 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
	kpsingh, sdf, haoluo, jolsa, davem, linux-kernel, Hao Sun

Make the verifier sanitize LDX insns in jited BPF programs. Here
dst_reg and AX are free, so different insn sequences that back up R0
and R1 are emitted depending on how they relate to dst_reg and
src_reg; all the scratch regs are backed up to the extended stack
space before calling the checking functions. Finally, the call to
the checking func is inserted and the regs are restored.
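
As a minimal sketch (matching the expected_insns in the selftests of
patch 3 for this case, with an example offset of 8), the rewrite of
"r0 = *(u64 *)(r2 + 8)" looks roughly like this; R0 needs no backup
here because the load overwrites it anyway:

	BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1),          /* AX <- R1 (save R1)   */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),           /* R1 <- src_reg        */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),          /* R1 <- addr to check  */
	/* back up R1~R5 to the extended stack area */
	BPF_EMIT_CALL(bpf_asan_load64),                /* KASAN-instrumented   */
	/* restore R1~R5 from the extended stack area */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),         /* undo the offset      */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX),          /* restore R1           */
	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8),  /* the original load    */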

Signed-off-by: Hao Sun <sunhao.th@gmail.com>
---
 kernel/bpf/verifier.c | 90 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5519c24c5bd4..4e253fc20bf2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15344,6 +15344,17 @@ BPF_ASAN_STORE(16);
 BPF_ASAN_STORE(32);
 BPF_ASAN_STORE(64);
 
+#define BPF_ASAN_LOAD(n)                         \
+	notrace u64 bpf_asan_load##n(u##n *addr) \
+	{                                        \
+		return *addr;                    \
+	}
+
+BPF_ASAN_LOAD(8);
+BPF_ASAN_LOAD(16);
+BPF_ASAN_LOAD(32);
+BPF_ASAN_LOAD(64);
+
 #endif
 
 /* Do various post-verification rewrites in a single program pass.
@@ -15588,6 +15599,85 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			insn = new_prog->insnsi + i + delta;
 			continue;
 		}
+
+		/* Sanitize LDX operation. */
+		if (BPF_CLASS(insn->code) == BPF_LDX) {
+			struct bpf_insn sanitize_fn;
+			struct bpf_insn *patch = &insn_buf[0];
+			bool dst_is_r0 = insn->dst_reg == BPF_REG_0;
+			bool dst_is_r1 = insn->dst_reg == BPF_REG_1;
+
+			if (in_patch_use_ax || insn->src_reg == BPF_REG_10)
+				continue;
+
+			switch (BPF_SIZE(insn->code)) {
+			case BPF_B:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_load8);
+				break;
+			case BPF_H:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_load16);
+				break;
+			case BPF_W:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_load32);
+				break;
+			case BPF_DW:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_load64);
+				break;
+			}
+
+			/* Back up R0 and R1; REG_AX and dst_reg are free. */
+			if (insn->src_reg == BPF_REG_1) {
+				if (!dst_is_r0)
+					*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_0);
+			} else if (insn->src_reg == BPF_REG_0) {
+				if (!dst_is_r1)
+					*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_0);
+			} else if (!dst_is_r1) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, insn->src_reg);
+				if (!dst_is_r0)
+					*patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_0);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, insn->src_reg);
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_0);
+			}
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, insn->off);
+			BACKUP_SCRATCH_REGS;
+			/* Invoke sanitize fn; R1~R5 are stored to stack during jit. */
+			*patch++ = sanitize_fn;
+			RESTORE_SCRATCH_REGS;
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -insn->off);
+			if (insn->src_reg == BPF_REG_1) {
+				if (!dst_is_r0)
+					*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_AX);
+			} else if (insn->src_reg == BPF_REG_0) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
+				if (!dst_is_r1)
+					*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			} else if (!dst_is_r1) {
+				if (!dst_is_r0)
+					*patch++ = BPF_MOV64_REG(BPF_REG_0, insn->dst_reg);
+				if (insn->src_reg == insn->dst_reg)
+					*patch++ = BPF_MOV64_REG(insn->src_reg, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_AX);
+			}
+			*patch++ = *insn;
+			cnt = patch - insn_buf;
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			continue;
+		}
 #endif
 
 		if (insn->code != (BPF_JMP | BPF_CALL))
-- 
2.38.1



* [PATCH bpf-next v2 3/3] selftests/bpf: Add tests for LDX/STX/ST sanitize
  2022-11-25  6:36 [PATCH bpf-next v2 0/3] bpf: Add LDX/STX/ST sanitize in jited BPF progs Hao Sun
  2022-11-25  6:36 ` [PATCH bpf-next v2 1/3] bpf: Sanitize STX/ST in jited BPF progs with KASAN Hao Sun
  2022-11-25  6:36 ` [PATCH bpf-next v2 2/3] bpf: Sanitize LDX " Hao Sun
@ 2022-11-25  6:36 ` Hao Sun
  2 siblings, 0 replies; 4+ messages in thread
From: Hao Sun @ 2022-11-25  6:36 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, john.fastabend, andrii, martin.lau, song, yhs,
	kpsingh, sdf, haoluo, jolsa, davem, linux-kernel, Hao Sun

Add tests for the LDX/STX/ST instrumentation in each possible case.
There are four cases for STX/ST, where dst_reg is R0, R1, R10, or
another reg, respectively, and ten cases for LDX. All new and
existing selftests pass.

A slab-out-of-bounds read report is also available; it is obtained
by exploiting CVE-2022-23222 and can be reproduced on Linux v5.10:
https://pastebin.com/raw/Ee1Cw492.

Signed-off-by: Hao Sun <sunhao.th@gmail.com>
---
 .../selftests/bpf/verifier/sanitize_st_ldx.c  | 362 ++++++++++++++++++
 1 file changed, 362 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c

diff --git a/tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c b/tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c
new file mode 100644
index 000000000000..1db0d1794f29
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c
@@ -0,0 +1,362 @@
+#ifdef CONFIG_BPF_PROG_KASAN
+
+#define __BACKUP_REG(n) \
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_##n, INSN_OFF_MASK)
+#define BACKUP_SCRATCH_REGS                                                 \
+	__BACKUP_REG(1), __BACKUP_REG(2), __BACKUP_REG(3), __BACKUP_REG(4), \
+		__BACKUP_REG(5)
+
+#define __RESTORE_REG(n) \
+	BPF_LDX_MEM(BPF_DW, BPF_REG_##n, BPF_REG_10, INSN_OFF_MASK)
+#define RESTORE_SCRATCH_REGS                                  \
+	__RESTORE_REG(1), __RESTORE_REG(2), __RESTORE_REG(3), \
+		__RESTORE_REG(4), __RESTORE_REG(5)
+
+{
+	"sanitize stx: dst is R1",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, MAX_BPF_REG),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	},
+},
+{
+	"sanitize stx: dst is R0",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	},
+},
+{
+	"sanitize stx: dst is R10",
+	.insns = {
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.unexpected_insns = {
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	},
+},
+{
+	"sanitize stx: dst is other regs",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	},
+},
+{
+	"sanitize ldx: src is R1, dst is R0",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
+	},
+},
+{
+	"sanitize ldx: src is R1, dst is R1",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, -8),
+	},
+},
+{
+	"sanitize ldx: src is R1, dst is other regs",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -8),
+	},
+},
+{
+	"sanitize ldx: src is R0, dst is R1",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+	},
+},
+{
+	"sanitize ldx: src is R0, dst is R0",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -8),
+	},
+},
+{
+	"sanitize ldx: src is R0, dst is other regs",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_0, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, -8),
+	},
+},
+{
+	"sanitize ldx: src is other regs, dst is R0",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8),
+	},
+},
+{
+	"sanitize ldx: src is other regs, dst is R1",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -8),
+	},
+},
+{
+	"sanitize ldx: src is other regs, dst is self",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8),
+	},
+},
+{
+	"sanitize ldx: src is other regs, dst is other regs",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 1),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_2, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 1, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_3),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+	.expected_insns = {
+	BPF_MOV64_REG(MAX_BPF_REG, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BACKUP_SCRATCH_REGS,
+	BPF_EMIT_CALL(INSN_IMM_MASK),
+	RESTORE_SCRATCH_REGS,
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_3),
+	BPF_MOV64_REG(BPF_REG_1, MAX_BPF_REG),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_2, -8),
+	},
+},
+#endif /* CONFIG_BPF_PROG_KASAN */
-- 
2.38.1


