bpf.vger.kernel.org archive mirror
* [PATCH v4 0/3] bpf,x64: implement jump padding in jit
@ 2021-01-19 10:24 Gary Lin
  2021-01-19 10:24 ` [PATCH v4 1/3] bpf,x64: pad NOPs to make images converge more easily Gary Lin
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Gary Lin @ 2021-01-19 10:24 UTC
  To: netdev, bpf, Alexei Starovoitov, Daniel Borkmann
  Cc: Eric Dumazet, Andrii Nakryiko, andreas.taschner

This patch series implements jump padding in the x64 jit to cover some
corner cases that used to consume more than 20 jit passes and caused
the jit to fail.

v4:
  - Add the detailed comments about the possible padding bytes
  - Add the second test case which triggers jmp_cond padding and imm32 nop
    jmp padding.
  - Add the new test case as another subprog

v3:
  - Copy the prologue instructions separately; otherwise, the size
    calculation of the first BPF instruction would include the prologue.
  - Replace WARN_ONCE() with pr_err() and EFAULT
  - Use MAX_PASSES in the for loop condition check
  - Remove the "padded" flag from x64_jit_data. For the extra pass of
    subprogs, padding is always enabled since it won't hurt the images
    that converge without padding.
v2:
  - Simplify the sample code in the commit description and provide the
    jit code
  - Check the expected padding bytes with WARN_ONCE
  - Move the 'padded' flag to 'struct x64_jit_data'
  - Remove the EXPECTED_FAIL flag from bpf_fill_maxinsns11() in test_bpf
  - Add 2 verifier tests

Gary Lin (3):
  bpf,x64: pad NOPs to make images converge more easily
  test_bpf: remove EXPECTED_FAIL flag from bpf_fill_maxinsns11
  selftests/bpf: Add verifier tests for x64 jit jump padding

 arch/x86/net/bpf_jit_comp.c                 | 140 ++++++++++++++++----
 lib/test_bpf.c                              |   7 +-
 tools/testing/selftests/bpf/test_verifier.c |  72 ++++++++++
 tools/testing/selftests/bpf/verifier/jit.c  |  24 ++++
 4 files changed, 209 insertions(+), 34 deletions(-)

-- 
2.29.2



* [PATCH v4 1/3] bpf,x64: pad NOPs to make images converge more easily
  2021-01-19 10:24 [PATCH v4 0/3] bpf,x64: implement jump padding in jit Gary Lin
@ 2021-01-19 10:24 ` Gary Lin
  2021-01-19 10:25 ` [PATCH v4 2/3] test_bpf: remove EXPECTED_FAIL flag from bpf_fill_maxinsns11 Gary Lin
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Gary Lin @ 2021-01-19 10:24 UTC
  To: netdev, bpf, Alexei Starovoitov, Daniel Borkmann
  Cc: Eric Dumazet, Andrii Nakryiko, andreas.taschner

The x64 bpf jit expects bpf images to converge within the given number
of passes, but it can fail to do so in some corner cases. For example:

      l0:     ja 40
      l1:     ja 40

        [... repeated ja 40 ]

      l39:    ja 40
      l40:    ret #0

This bpf program contains 40 "ja 40" instructions which are effectively
NOPs and designed to be replaced with valid code dynamically. Ideally,
the bpf jit should optimize those "ja 40" instructions out when
translating the bpf instructions into x64 machine code. However,
do_jit() can only remove one offset==0 "ja 40" per pass, so it takes at
least 40 runs to eliminate those JMPs, exceeding the current limit of
20 passes. In the end, the program is rejected when BPF_JIT_ALWAYS_ON
is set, even though it is legitimate as a classic socket filter.
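
As an illustration, below is a rough user-space model (not kernel code)
of the shrinking behavior: roughly, do_jit() computes jump offsets from
the addrs[] recorded in the previous pass, so a "ja" only collapses to
offset 0 one pass after everything between it and its target has
already shrunk:

    /* Toy model of do_jit()'s pass behavior; not kernel code. */
    #include <stdio.h>

    #define N 40    /* forty "ja" insns, all targeting the insn after l39 */

    int main(void)
    {
            int size[N], addrs[N + 1];
            int i, pass, changed;

            for (i = 0; i < N; i++)
                    size[i] = 2;            /* first pass emits imm8 jmps */

            for (pass = 1; ; pass++) {
                    addrs[0] = 0;           /* insn end offsets, like addrs[] */
                    for (i = 0; i < N; i++)
                            addrs[i + 1] = addrs[i] + size[i];

                    changed = 0;
                    for (i = 0; i < N; i++) {
                            /* offset is based on the previous pass */
                            int off = addrs[N] - addrs[i + 1];
                            int newsz = off ? 2 : 0; /* drop nop jumps */

                            if (newsz != size[i])
                                    changed = 1;
                            size[i] = newsz;
                    }
                    if (!changed)
                            break;
            }
            printf("converged after %d passes\n", pass);    /* prints 41 */
            return 0;
    }

Only the bottom-most remaining jump sees offset 0 in any given pass, so
the model needs N+1 = 41 passes, far beyond the 20-pass limit.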

To make bpf images more likely to converge within 20 passes, this commit
pads some instructions with NOPs in the last 5 passes:

1. conditional jumps
  A possible size variance comes from the adoption of imm8 JMP. If the
  offset fits in imm8, we calculate the size difference of this BPF
  instruction between the previous and the current pass and fill the gap
  with NOPs. To avoid recalculating the jump offset, those NOPs are
  inserted before the JMP code, so we have to subtract the 2 bytes of the
  imm8 JMP when calculating the number of NOPs.

2. BPF_JA
  There are two cases for BPF_JA.
  a.) nop jumps
    If this instruction was not optimized out in the previous pass,
    instead of removing it, we insert the equivalent size of NOPs.
  b.) label jumps
    Similar to conditional jumps, we prepend NOPs right before the JMP
    code.

To make the code concise, emit_nops() is modified to take a signed len
and return the number of inserted NOPs.
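
Putting the cases above together, the expected padding amounts are as
follows (these are the values the sanity checks added to do_jit() below
accept; no padding is needed when the insn did not shrink):

    insn kind          previous pass      current pass      padding
    jmp_cond           imm32 (6 bytes)    imm8 (2 bytes)    4 bytes
    BPF_JA (nop jmp)   imm8  (2 bytes)    optimized out     2 bytes
    BPF_JA (nop jmp)   imm32 (5 bytes)    optimized out     5 bytes
    BPF_JA (label)     imm32 (5 bytes)    imm8 (2 bytes)    3 bytes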

For bpf-to-bpf calls, we always enable padding for the extra pass since
there is only one extra run and the jump padding doesn't affect the
images that converge without padding.

After applying this patch, the corner case was loaded with the following
jit code:

    flen=45 proglen=77 pass=17 image=ffffffffc03367d4 from=jump pid=10097
    JIT code: 00000000: 0f 1f 44 00 00 55 48 89 e5 53 41 55 31 c0 45 31
    JIT code: 00000010: ed 48 89 fb eb 30 eb 2e eb 2c eb 2a eb 28 eb 26
    JIT code: 00000020: eb 24 eb 22 eb 20 eb 1e eb 1c eb 1a eb 18 eb 16
    JIT code: 00000030: eb 14 eb 12 eb 10 eb 0e eb 0c eb 0a eb 08 eb 06
    JIT code: 00000040: eb 04 eb 02 66 90 31 c0 41 5d 5b c9 c3

     0: 0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
     5: 55                      push   rbp
     6: 48 89 e5                mov    rbp,rsp
     9: 53                      push   rbx
     a: 41 55                   push   r13
     c: 31 c0                   xor    eax,eax
     e: 45 31 ed                xor    r13d,r13d
    11: 48 89 fb                mov    rbx,rdi
    14: eb 30                   jmp    0x46
    16: eb 2e                   jmp    0x46
        ...
    3e: eb 06                   jmp    0x46
    40: eb 04                   jmp    0x46
    42: eb 02                   jmp    0x46
    44: 66 90                   xchg   ax,ax
    46: 31 c0                   xor    eax,eax
    48: 41 5d                   pop    r13
    4a: 5b                      pop    rbx
    4b: c9                      leave
    4c: c3                      ret

By the 16th pass, 15 jumps had already been optimized out; one jump was
replaced with a 2-byte NOP at 0x44, and the image converged at the 17th
pass.
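
This tallies with the dump above: of the original 40 "ja" insns,

      24 remain as imm8 jmps (0x14 .. 0x42)
    +  1 is padded to the 2-byte NOP at 0x44
    + 15 were optimized out in earlier passes
    = 40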

v4:
  - Add the detailed comments about the possible padding bytes

v3:
  - Copy the prologue instructions separately; otherwise, the size
    calculation of the first BPF instruction would include the prologue.
  - Replace WARN_ONCE() with pr_err() and EFAULT
  - Use MAX_PASSES in the for loop condition check
  - Remove the "padded" flag from x64_jit_data. For the extra pass of
    subprogs, padding is always enabled since it won't hurt the images
    that converge without padding.

v2:
  - Simplify the sample code in the description and provide the jit code
  - Check the expected padding bytes with WARN_ONCE
  - Move the 'padded' flag to 'struct x64_jit_data'

Signed-off-by: Gary Lin <glin@suse.com>
---
 arch/x86/net/bpf_jit_comp.c | 140 ++++++++++++++++++++++++++++--------
 1 file changed, 112 insertions(+), 28 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 1d4d50199293..b7a2911bda77 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -869,8 +869,31 @@ static void detect_reg_usage(struct bpf_insn *insn, int insn_cnt,
 	}
 }
 
+static int emit_nops(u8 **pprog, int len)
+{
+	u8 *prog = *pprog;
+	int i, noplen, cnt = 0;
+
+	while (len > 0) {
+		noplen = len;
+
+		if (noplen > ASM_NOP_MAX)
+			noplen = ASM_NOP_MAX;
+
+		for (i = 0; i < noplen; i++)
+			EMIT1(ideal_nops[noplen][i]);
+		len -= noplen;
+	}
+
+	*pprog = prog;
+
+	return cnt;
+}
+
+#define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
+
 static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
-		  int oldproglen, struct jit_context *ctx)
+		  int oldproglen, struct jit_context *ctx, bool jmp_padding)
 {
 	bool tail_call_reachable = bpf_prog->aux->tail_call_reachable;
 	struct bpf_insn *insn = bpf_prog->insnsi;
@@ -880,7 +903,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 	bool seen_exit = false;
 	u8 temp[BPF_MAX_INSN_SIZE + BPF_INSN_SAFETY];
 	int i, cnt = 0, excnt = 0;
-	int proglen = 0;
+	int ilen, proglen = 0;
 	u8 *prog = temp;
 	int err;
 
@@ -894,7 +917,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		      bpf_prog_was_classic(bpf_prog), tail_call_reachable,
 		      bpf_prog->aux->func_idx != 0);
 	push_callee_regs(&prog, callee_regs_used);
-	addrs[0] = prog - temp;
+
+	ilen = prog - temp;
+	if (image)
+		memcpy(image + proglen, temp, ilen);
+	proglen += ilen;
+	addrs[0] = proglen;
+	prog = temp;
 
 	for (i = 1; i <= insn_cnt; i++, insn++) {
 		const s32 imm32 = insn->imm;
@@ -903,8 +932,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		u8 b2 = 0, b3 = 0;
 		s64 jmp_offset;
 		u8 jmp_cond;
-		int ilen;
 		u8 *func;
+		int nops;
 
 		switch (insn->code) {
 			/* ALU */
@@ -1502,6 +1531,30 @@ st:			if (is_imm8(insn->off))
 			}
 			jmp_offset = addrs[i + insn->off] - addrs[i];
 			if (is_imm8(jmp_offset)) {
+				if (jmp_padding) {
+					/* To keep the jmp_offset valid, the extra bytes are
+					 * padded before the jump insn, so we subtract the
+					 * 2 bytes of jmp_cond insn from INSN_SZ_DIFF.
+					 *
+					 * If the previous pass already emits an imm8
+					 * jmp_cond, then this BPF insn won't shrink, so
+					 * "nops" is 0.
+					 *
+					 * On the other hand, if the previous pass emits an
+					 * imm32 jmp_cond, the extra 4 bytes(*) are padded to
+					 * keep the image from shrinking further.
+					 *
+					 * (*) imm32 jmp_cond is 6 bytes, and imm8 jmp_cond
+					 *     is 2 bytes, so the size difference is 4 bytes.
+					 */
+					nops = INSN_SZ_DIFF - 2;
+					if (nops != 0 && nops != 4) {
+						pr_err("unexpected jmp_cond padding: %d bytes\n",
+						       nops);
+						return -EFAULT;
+					}
+					cnt += emit_nops(&prog, nops);
+				}
 				EMIT2(jmp_cond, jmp_offset);
 			} else if (is_simm32(jmp_offset)) {
 				EMIT2_off32(0x0F, jmp_cond + 0x10, jmp_offset);
@@ -1524,11 +1577,55 @@ st:			if (is_imm8(insn->off))
 			else
 				jmp_offset = addrs[i + insn->off] - addrs[i];
 
-			if (!jmp_offset)
-				/* Optimize out nop jumps */
+			if (!jmp_offset) {
+				/*
+				 * If jmp_padding is enabled, the extra nops will
+				 * be inserted. Otherwise, optimize out nop jumps.
+				 */
+				if (jmp_padding) {
+					/* There are 3 possible conditions.
+					 * (1) This BPF_JA is already optimized out in
+					 *     the previous run, so there is no need
+					 *     to pad any extra byte (0 byte).
+					 * (2) The previous pass emits an imm8 jmp,
+					 *     so we pad 2 bytes to match the previous
+					 *     insn size.
+					 * (3) Similarly, the previous pass emits an
+					 *     imm32 jmp, and 5 bytes is padded.
+					 */
+					nops = INSN_SZ_DIFF;
+					if (nops != 0 && nops != 2 && nops != 5) {
+						pr_err("unexpected nop jump padding: %d bytes\n",
+						       nops);
+						return -EFAULT;
+					}
+					cnt += emit_nops(&prog, nops);
+				}
 				break;
+			}
 emit_jmp:
 			if (is_imm8(jmp_offset)) {
+				if (jmp_padding) {
+					/* To avoid breaking jmp_offset, the extra bytes
+					 * are padded before the actual jmp insn, so
+					 * 2 bytes are subtracted from INSN_SZ_DIFF.
+					 *
+					 * If the previous pass already emits an imm8
+					 * jmp, there is nothing to pad (0 byte).
+					 *
+					 * If it emits an imm32 jmp (5 bytes) previously
+					 * and now an imm8 jmp (2 bytes), then we pad
+					 * (5 - 2 = 3) bytes to stop the image from
+					 * shrinking further.
+					 */
+					nops = INSN_SZ_DIFF - 2;
+					if (nops != 0 && nops != 3) {
+						pr_err("unexpected jump padding: %d bytes\n",
+						       nops);
+						return -EFAULT;
+					}
+					cnt += emit_nops(&prog, INSN_SZ_DIFF - 2);
+				}
 				EMIT2(0xEB, jmp_offset);
 			} else if (is_simm32(jmp_offset)) {
 				EMIT1_off32(0xE9, jmp_offset);
@@ -1671,26 +1768,6 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 	return 0;
 }
 
-static void emit_nops(u8 **pprog, unsigned int len)
-{
-	unsigned int i, noplen;
-	u8 *prog = *pprog;
-	int cnt = 0;
-
-	while (len > 0) {
-		noplen = len;
-
-		if (noplen > ASM_NOP_MAX)
-			noplen = ASM_NOP_MAX;
-
-		for (i = 0; i < noplen; i++)
-			EMIT1(ideal_nops[noplen][i]);
-		len -= noplen;
-	}
-
-	*pprog = prog;
-}
-
 static void emit_align(u8 **pprog, u32 align)
 {
 	u8 *target, *prog = *pprog;
@@ -2065,6 +2142,9 @@ struct x64_jit_data {
 	struct jit_context ctx;
 };
 
+#define MAX_PASSES 20
+#define PADDING_PASSES (MAX_PASSES - 5)
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	struct bpf_binary_header *header = NULL;
@@ -2074,6 +2154,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	struct jit_context ctx = {};
 	bool tmp_blinded = false;
 	bool extra_pass = false;
+	bool padding = false;
 	u8 *image = NULL;
 	int *addrs;
 	int pass;
@@ -2110,6 +2191,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		image = jit_data->image;
 		header = jit_data->header;
 		extra_pass = true;
+		padding = true;
 		goto skip_init_addrs;
 	}
 	addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
@@ -2135,8 +2217,10 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * may converge on the last pass. In such case do one more
 	 * pass to emit the final image.
 	 */
-	for (pass = 0; pass < 20 || image; pass++) {
-		proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
+	for (pass = 0; pass < MAX_PASSES || image; pass++) {
+		if (!padding && pass >= PADDING_PASSES)
+			padding = true;
+		proglen = do_jit(prog, addrs, image, oldproglen, &ctx, padding);
 		if (proglen <= 0) {
 out_image:
 			image = NULL;
-- 
2.29.2



* [PATCH v4 2/3] test_bpf: remove EXPECTED_FAIL flag from bpf_fill_maxinsns11
  2021-01-19 10:24 [PATCH v4 0/3] bpf,x64: implement jump padding in jit Gary Lin
  2021-01-19 10:24 ` [PATCH v4 1/3] bpf,x64: pad NOPs to make images converge more easily Gary Lin
@ 2021-01-19 10:25 ` Gary Lin
  2021-01-19 10:25 ` [PATCH v4 3/3] selftests/bpf: Add verifier tests for x64 jit jump padding Gary Lin
  2021-01-19 20:54 ` [PATCH v4 0/3] bpf,x64: implement jump padding in jit Alexei Starovoitov
  3 siblings, 0 replies; 5+ messages in thread
From: Gary Lin @ 2021-01-19 10:25 UTC
  To: netdev, bpf, Alexei Starovoitov, Daniel Borkmann
  Cc: Eric Dumazet, Andrii Nakryiko, andreas.taschner

With NOP padding, the x64 jit can now handle jump cases like
bpf_fill_maxinsns11().
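
For reference, a quick way to exercise just this case (assuming
CONFIG_TEST_BPF=m and the usual test_bpf module parameters) is:

    # run only the previously-failing case; results land in the kernel log
    modprobe test_bpf test_name="BPF_MAXINSNS: Jump, gap, jump, ..."
    dmesg | tail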

Signed-off-by: Gary Lin <glin@suse.com>
---
 lib/test_bpf.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 49ec9e8d8aed..4dc4dcbecd12 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -345,7 +345,7 @@ static int __bpf_fill_ja(struct bpf_test *self, unsigned int len,
 
 static int bpf_fill_maxinsns11(struct bpf_test *self)
 {
-	/* Hits 70 passes on x86_64, so cannot get JITed there. */
+	/* Hits 70 passes on x86_64 and triggers NOP padding. */
 	return __bpf_fill_ja(self, BPF_MAXINSNS, 68);
 }
 
@@ -5318,15 +5318,10 @@ static struct bpf_test tests[] = {
 	{
 		"BPF_MAXINSNS: Jump, gap, jump, ...",
 		{ },
-#if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_X86)
-		CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL,
-#else
 		CLASSIC | FLAG_NO_DATA,
-#endif
 		{ },
 		{ { 0, 0xababcbac } },
 		.fill_helper = bpf_fill_maxinsns11,
-		.expected_errcode = -ENOTSUPP,
 	},
 	{
 		"BPF_MAXINSNS: jump over MSH",
-- 
2.29.2



* [PATCH v4 3/3] selftests/bpf: Add verifier tests for x64 jit jump padding
  2021-01-19 10:24 [PATCH v4 0/3] bpf,x64: implement jump padding in jit Gary Lin
  2021-01-19 10:24 ` [PATCH v4 1/3] bpf,x64: pad NOPs to make images converge more easily Gary Lin
  2021-01-19 10:25 ` [PATCH v4 2/3] test_bpf: remove EXPECTED_FAIL flag from bpf_fill_maxinsns11 Gary Lin
@ 2021-01-19 10:25 ` Gary Lin
  2021-01-19 20:54 ` [PATCH v4 0/3] bpf,x64: implement jump padding in jit Alexei Starovoitov
  3 siblings, 0 replies; 5+ messages in thread
From: Gary Lin @ 2021-01-19 10:25 UTC
  To: netdev, bpf, Alexei Starovoitov, Daniel Borkmann
  Cc: Eric Dumazet, Andrii Nakryiko, andreas.taschner

This patch adds 3 tests to the verifier's jit tests to trigger x64
jit jump padding.

The first test can be represented as the following assembly code:

      1: bpf_call bpf_get_prandom_u32
      2: if r0 == 1 goto pc+128
      3: if r0 == 2 goto pc+128
         ...
    129: if r0 == 128 goto pc+128
    130: goto pc+128
    131: goto pc+127
         ...
    256: goto pc+2
    257: goto pc+1
    258: r0 = 1
    259: ret

We first store a random number in r0 and add the corresponding
conditional jumps (2~129) to make the verifier believe that the jump
instructions from 130 to 257 are reachable. When the program is sent to
the x64 jit, it starts to optimize out the NOP jumps backwards from insn
257. Since there are 128 such jumps, the program easily reaches 15
passes and triggers jump padding.

Here is the x64 jit code of the first test:

      0:    0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
      5:    66 90                   xchg   ax,ax
      7:    55                      push   rbp
      8:    48 89 e5                mov    rbp,rsp
      b:    e8 4c 90 75 e3          call   0xffffffffe375905c
     10:    48 83 f8 01             cmp    rax,0x1
     14:    0f 84 fe 04 00 00       je     0x518
     1a:    48 83 f8 02             cmp    rax,0x2
     1e:    0f 84 f9 04 00 00       je     0x51d
      ...
     f6:    48 83 f8 18             cmp    rax,0x18
     fa:    0f 84 8b 04 00 00       je     0x58b
    100:    48 83 f8 19             cmp    rax,0x19
    104:    0f 84 86 04 00 00       je     0x590
    10a:    48 83 f8 1a             cmp    rax,0x1a
    10e:    0f 84 81 04 00 00       je     0x595
      ...
    500:    0f 84 83 01 00 00       je     0x689
    506:    48 81 f8 80 00 00 00    cmp    rax,0x80
    50d:    0f 84 76 01 00 00       je     0x689
    513:    e9 71 01 00 00          jmp    0x689
    518:    e9 6c 01 00 00          jmp    0x689
      ...
    5fe:    e9 86 00 00 00          jmp    0x689
    603:    e9 81 00 00 00          jmp    0x689
    608:    0f 1f 00                nop    DWORD PTR [rax]
    60b:    eb 7c                   jmp    0x689
    60d:    eb 7a                   jmp    0x689
      ...
    683:    eb 04                   jmp    0x689
    685:    eb 02                   jmp    0x689
    687:    66 90                   xchg   ax,ax
    689:    b8 01 00 00 00          mov    eax,0x1
    68e:    c9                      leave
    68f:    c3                      ret

As expected, a 3-byte NOP is inserted at 608 due to the transition from
imm32 jmp to imm8 jmp, and a 2-byte NOP is inserted at 687 to replace a
nop jump.
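
The padded sizes can be read straight off the dump:

    608: previous pass imm32 jmp (5 bytes), now imm8 jmp (2 bytes)
         -> INSN_SZ_DIFF - 2 = 3 bytes of NOP (0f 1f 00)
    687: previous pass imm8 jmp (2 bytes), now offset-0 jmp removed
         -> INSN_SZ_DIFF = 2 bytes of NOP (66 90)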

The second test case is tricky. Here is the assembly code:

       1: bpf_call bpf_get_prandom_u32
       2: if r0 == 1 goto pc+2048
       3: if r0 == 2 goto pc+2048
       ...
    2049: if r0 == 2048 goto pc+2048
    2050: goto pc+2048
    2051: goto pc+16
    2052: goto pc+15
       ...
    2064: goto pc+3
    2065: goto pc+2
    2066: goto pc+1
       ...
       [repeat "goto pc+16".."goto pc+1" 127 times]
       ...
    4099: r0 = 2
    4100: ret

There are 4 major parts in the program.
1) 1~2049: These instructions make 2050~4098 reachable. Some of them
           can also generate jmp_cond padding.
2) 2050: This is the target instruction for the imm32 nop jmp padding.
3) 2051~4098: The repeated "goto pc+16".."goto pc+1" instructions are
              designed to be consumed by the nop jmp optimization. In
              the end, those instructions collapse into 128 blocks of
              zero-offset jmps and are optimized out in one pass, which
              makes insn 2050 an imm32 nop jmp in the next pass, so
              that we can trigger the 5-byte padding.
4) 4099~4100: These instructions end the program.

The x64 jit code is like this:

       0:       0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
       5:       66 90                   xchg   ax,ax
       7:       55                      push   rbp
       8:       48 89 e5                mov    rbp,rsp
       b:       e8 bc 7b d5 d3          call   0xffffffffd3d57bcc
      10:       48 83 f8 01             cmp    rax,0x1
      14:       0f 84 7e 66 00 00       je     0x6698
      1a:       48 83 f8 02             cmp    rax,0x2
      1e:       0f 84 74 66 00 00       je     0x6698
      24:       48 83 f8 03             cmp    rax,0x3
      28:       0f 84 6a 66 00 00       je     0x6698
      2e:       48 83 f8 04             cmp    rax,0x4
      32:       0f 84 60 66 00 00       je     0x6698
      38:       48 83 f8 05             cmp    rax,0x5
      3c:       0f 84 56 66 00 00       je     0x6698
      42:       48 83 f8 06             cmp    rax,0x6
      46:       0f 84 4c 66 00 00       je     0x6698
      ...
    666c:       48 81 f8 fe 07 00 00    cmp    rax,0x7fe
    6673:       0f 1f 40 00             nop    DWORD PTR [rax+0x0]
    6677:       74 1f                   je     0x6698
    6679:       48 81 f8 ff 07 00 00    cmp    rax,0x7ff
    6680:       0f 1f 40 00             nop    DWORD PTR [rax+0x0]
    6684:       74 12                   je     0x6698
    6686:       48 81 f8 00 08 00 00    cmp    rax,0x800
    668d:       0f 1f 40 00             nop    DWORD PTR [rax+0x0]
    6691:       74 05                   je     0x6698
    6693:       0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
    6698:       b8 02 00 00 00          mov    eax,0x2
    669d:       c9                      leave
    669e:       c3                      ret

Since insns 2051~4098 are optimized out right before the padding pass,
several conditional jumps from the first part are replaced with imm8
jmp_cond, which triggers the 4-byte padding, for example at 6673, 6680,
and 668d. Insn 2050, on the other hand, is replaced with the 5-byte NOP
at 6693.
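
Again, the padded sizes follow from the size deltas:

    6673/6680/668d: previous pass imm32 jmp_cond (6 bytes), now imm8 (2)
                    -> INSN_SZ_DIFF - 2 = 4 bytes of NOP (0f 1f 40 00)
    6693:           previous pass imm32 jmp (5 bytes), now offset 0
                    -> INSN_SZ_DIFF = 5 bytes of NOP (0f 1f 44 00 00)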

The third test invokes the first and second tests as subprogs to test
bpf2bpf calls. Per the system log, one more jit run happened with only
one pass, and the same jit code was produced.

v4:
  - Add the second test case which triggers jmp_cond padding and imm32 nop
    jmp padding.
  - Add the new test case as another subprog

Signed-off-by: Gary Lin <glin@suse.com>
---
 tools/testing/selftests/bpf/test_verifier.c | 72 +++++++++++++++++++++
 tools/testing/selftests/bpf/verifier/jit.c  | 24 +++++++
 2 files changed, 96 insertions(+)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 777a81404fdb..e0e65ff30a2d 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -296,6 +296,78 @@ static void bpf_fill_scale(struct bpf_test *self)
 	}
 }
 
+static int bpf_fill_torturous_jumps_insn_1(struct bpf_insn *insn)
+{
+	unsigned int len = 259, hlen = 128;
+	int i;
+
+	insn[0] = BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32);
+	for (i = 1; i <= hlen; i++) {
+		insn[i]        = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, i, hlen);
+		insn[i + hlen] = BPF_JMP_A(hlen - i);
+	}
+	insn[len - 2] = BPF_MOV64_IMM(BPF_REG_0, 1);
+	insn[len - 1] = BPF_EXIT_INSN();
+
+	return len;
+}
+
+static int bpf_fill_torturous_jumps_insn_2(struct bpf_insn *insn)
+{
+	unsigned int len = 4100, jmp_off = 2048;
+	int i, j;
+
+	insn[0] = BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32);
+	for (i = 1; i <= jmp_off; i++) {
+		insn[i] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, i, jmp_off);
+	}
+	insn[i++] = BPF_JMP_A(jmp_off);
+	for (; i <= jmp_off * 2 + 1; i += 16) {
+		for (j = 0; j < 16; j++) {
+			insn[i + j] = BPF_JMP_A(16 - j - 1);
+		}
+	}
+
+	insn[len - 2] = BPF_MOV64_IMM(BPF_REG_0, 2);
+	insn[len - 1] = BPF_EXIT_INSN();
+
+	return len;
+}
+
+static void bpf_fill_torturous_jumps(struct bpf_test *self)
+{
+	struct bpf_insn *insn = self->fill_insns;
+	int i = 0;
+
+	switch (self->retval) {
+	case 1:
+		self->prog_len = bpf_fill_torturous_jumps_insn_1(insn);
+		return;
+	case 2:
+		self->prog_len = bpf_fill_torturous_jumps_insn_2(insn);
+		return;
+	case 3:
+		/* main */
+		insn[i++] = BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 4);
+		insn[i++] = BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 262);
+		insn[i++] = BPF_ST_MEM(BPF_B, BPF_REG_10, -32, 0);
+		insn[i++] = BPF_MOV64_IMM(BPF_REG_0, 3);
+		insn[i++] = BPF_EXIT_INSN();
+
+		/* subprog 1 */
+		i += bpf_fill_torturous_jumps_insn_1(insn + i);
+
+		/* subprog 2 */
+		i += bpf_fill_torturous_jumps_insn_2(insn + i);
+
+		self->prog_len = i;
+		return;
+	default:
+		self->prog_len = 0;
+		break;
+	}
+}
+
 /* BPF_SK_LOOKUP contains 13 instructions, if you need to fix up maps */
 #define BPF_SK_LOOKUP(func)						\
 	/* struct bpf_sock_tuple tuple = {} */				\
diff --git a/tools/testing/selftests/bpf/verifier/jit.c b/tools/testing/selftests/bpf/verifier/jit.c
index c33adf344fae..df215e004566 100644
--- a/tools/testing/selftests/bpf/verifier/jit.c
+++ b/tools/testing/selftests/bpf/verifier/jit.c
@@ -105,3 +105,27 @@
 	.result = ACCEPT,
 	.retval = 2,
 },
+{
+	"jit: torturous jumps, imm8 nop jmp and pure jump padding",
+	.insns = { },
+	.fill_helper = bpf_fill_torturous_jumps,
+	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	.result = ACCEPT,
+	.retval = 1,
+},
+{
+	"jit: torturous jumps, imm32 nop jmp and jmp_cond padding",
+	.insns = { },
+	.fill_helper = bpf_fill_torturous_jumps,
+	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	.result = ACCEPT,
+	.retval = 2,
+},
+{
+	"jit: torturous jumps in subprog",
+	.insns = { },
+	.fill_helper = bpf_fill_torturous_jumps,
+	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	.result = ACCEPT,
+	.retval = 3,
+},
-- 
2.29.2



* Re: [PATCH v4 0/3] bpf,x64: implement jump padding in jit
  2021-01-19 10:24 [PATCH v4 0/3] bpf,x64: implement jump padding in jit Gary Lin
                   ` (2 preceding siblings ...)
  2021-01-19 10:25 ` [PATCH v4 3/3] selftests/bpf: Add verifier tests for x64 jit jump padding Gary Lin
@ 2021-01-19 20:54 ` Alexei Starovoitov
  3 siblings, 0 replies; 5+ messages in thread
From: Alexei Starovoitov @ 2021-01-19 20:54 UTC
  To: Gary Lin
  Cc: Network Development, bpf, Alexei Starovoitov, Daniel Borkmann,
	Eric Dumazet, Andrii Nakryiko, andreas.taschner

On Tue, Jan 19, 2021 at 2:25 AM Gary Lin <glin@suse.com> wrote:
>
> This patch series implements jump padding to x64 jit to cover some
> corner cases that used to consume more than 20 jit passes and caused
> failure.
>
> v4:
>   - Add the detailed comments about the possible padding bytes
>   - Add the second test case which triggers jmp_cond padding and imm32 nop
>     jmp padding.
>   - Add the new test case as another subprog

Applied. Thanks a ton for all the hard work you put into it.
