* [PATCH bpf 0/2] bpf: fix bug in x64 JIT
@ 2019-07-31  1:38 Alexei Starovoitov
  2019-07-31  1:38 ` [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn Alexei Starovoitov
  2019-07-31  1:38 ` [PATCH bpf 2/2] selftests/bpf: tests " Alexei Starovoitov
  0 siblings, 2 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2019-07-31  1:38 UTC (permalink / raw)
  To: davem; +Cc: daniel, netdev, bpf, kernel-team

Fix a bug in the computation of the jump offset to the 1st instruction
in the x64 BPF JIT and add 2 tests.

Alexei Starovoitov (2):
  bpf: fix x64 JIT code generation for jmp to 1st insn
  selftests/bpf: tests for jmp to 1st insn

 arch/x86/net/bpf_jit_comp.c                   |  7 +++--
 tools/testing/selftests/bpf/verifier/loops1.c | 28 +++++++++++++++++++
 2 files changed, 32 insertions(+), 3 deletions(-)

-- 
2.20.0



* [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn
  2019-07-31  1:38 [PATCH bpf 0/2] bpf: fix bug in x64 JIT Alexei Starovoitov
@ 2019-07-31  1:38 ` Alexei Starovoitov
  2019-07-31 19:35   ` Song Liu
  2019-07-31  1:38 ` [PATCH bpf 2/2] selftests/bpf: tests " Alexei Starovoitov
  1 sibling, 1 reply; 7+ messages in thread
From: Alexei Starovoitov @ 2019-07-31  1:38 UTC (permalink / raw)
  To: davem; +Cc: daniel, netdev, bpf, kernel-team

The introduction of bounded loops exposed an old bug in the x64 JIT.
The JIT maintains an array of offsets to the end of all instructions to
compute jmp offsets:
addrs[0] - offset of the end of the 1st insn (which includes the prologue).
addrs[1] - offset of the end of the 2nd insn.
The JIT didn't keep the offset of the beginning of the 1st insn,
since classic BPF didn't have backward jumps and valid extended BPF
couldn't have a branch to the 1st insn, because it didn't allow loops.
With bounded loops it's now possible to construct a valid program that
jumps backwards to the 1st insn.
Fix the JIT by computing:
addrs[0] - offset of the end of the prologue == start of the 1st insn.
addrs[1] - offset of the end of the 1st insn.
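
For illustration (a simplified sketch that reuses the variable names from
the diff below, not a verbatim copy of the kernel code), the JIT resolves
a branch by subtracting two recorded offsets:

	/* addrs[i] - end of the i-th insn in the JITed image; with this
	 * patch addrs[0] is the end of the prologue, i.e. the start of
	 * the 1st insn
	 */
	jmp_offset = addrs[i + insn->off] - addrs[i];

A backward jump to the 1st insn needs an entry that describes the start
of the 1st insn, which the old layout had no slot for.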

Reported-by: syzbot+35101610ff3e83119b1b@syzkaller.appspotmail.com
Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index eaaed5bfc4a4..a56c95805732 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -390,8 +390,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 
 	emit_prologue(&prog, bpf_prog->aux->stack_depth,
 		      bpf_prog_was_classic(bpf_prog));
+	addrs[0] = prog - temp;
 
-	for (i = 0; i < insn_cnt; i++, insn++) {
+	for (i = 1; i <= insn_cnt; i++, insn++) {
 		const s32 imm32 = insn->imm;
 		u32 dst_reg = insn->dst_reg;
 		u32 src_reg = insn->src_reg;
@@ -1105,7 +1106,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		extra_pass = true;
 		goto skip_init_addrs;
 	}
-	addrs = kmalloc_array(prog->len, sizeof(*addrs), GFP_KERNEL);
+	addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
 	if (!addrs) {
 		prog = orig_prog;
 		goto out_addrs;
@@ -1115,7 +1116,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * Before first pass, make a rough estimation of addrs[]
 	 * each BPF instruction is translated to less than 64 bytes
 	 */
-	for (proglen = 0, i = 0; i < prog->len; i++) {
+	for (proglen = 0, i = 0; i <= prog->len; i++) {
 		proglen += 64;
 		addrs[i] = proglen;
 	}
-- 
2.20.0



* [PATCH bpf 2/2] selftests/bpf: tests for jmp to 1st insn
  2019-07-31  1:38 [PATCH bpf 0/2] bpf: fix bug in x64 JIT Alexei Starovoitov
  2019-07-31  1:38 ` [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn Alexei Starovoitov
@ 2019-07-31  1:38 ` Alexei Starovoitov
  2019-07-31 19:38   ` Song Liu
  1 sibling, 1 reply; 7+ messages in thread
From: Alexei Starovoitov @ 2019-07-31  1:38 UTC (permalink / raw)
  To: davem; +Cc: daniel, netdev, bpf, kernel-team

Add 2 tests that check JIT code generation for jumps to the 1st insn.
The 1st test is similar to the syzbot reproducer.
Its backward branch is never taken at runtime.
The 2nd test has a branch to the 1st insn that is taken at runtime.
The test is written as two bpf functions, since it's not possible
to construct a valid single bpf program that jumps to its 1st insn.
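
In the 2nd test the subprogram counts R1 down from 10 while accumulating
into R2, so the expected return value is 10 + 9 + ... + 1 = 55.  A rough
C equivalent of what the subprogram computes (a sketch for illustration
only, not code that is part of the test):

	int r1 = 10, r2 = 0;	/* set up by the caller before the bpf-to-bpf call */

	do {
		r2 += r1;
		r1 -= 1;
	} while (r1 != 0);	/* the back jump lands on the subprog's 1st insn */
	return r2;		/* 55 */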

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/testing/selftests/bpf/verifier/loops1.c | 28 +++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/tools/testing/selftests/bpf/verifier/loops1.c b/tools/testing/selftests/bpf/verifier/loops1.c
index 5e980a5ab69d..1fc4e61e9f9f 100644
--- a/tools/testing/selftests/bpf/verifier/loops1.c
+++ b/tools/testing/selftests/bpf/verifier/loops1.c
@@ -159,3 +159,31 @@
 	.errstr = "loop detected",
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
 },
+{
+	"not-taken loop with back jump to 1st insn",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 123),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, -2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.retval = 123,
+},
+{
+	"taken loop with back jump to 1st insn",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_1, 10),
+	BPF_MOV64_IMM(BPF_REG_2, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_EXIT_INSN(),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1),
+	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, -3),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.retval = 55,
+},
-- 
2.20.0



* Re: [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn
  2019-07-31  1:38 ` [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn Alexei Starovoitov
@ 2019-07-31 19:35   ` Song Liu
  2019-08-01  3:43     ` Alexei Starovoitov
  0 siblings, 1 reply; 7+ messages in thread
From: Song Liu @ 2019-07-31 19:35 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, netdev, bpf, Kernel Team



> On Jul 30, 2019, at 6:38 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> The introduction of bounded loops exposed an old bug in the x64 JIT.
> The JIT maintains an array of offsets to the end of all instructions to
> compute jmp offsets:
> addrs[0] - offset of the end of the 1st insn (which includes the prologue).
> addrs[1] - offset of the end of the 2nd insn.
> The JIT didn't keep the offset of the beginning of the 1st insn,
> since classic BPF didn't have backward jumps and valid extended BPF
> couldn't have a branch to the 1st insn, because it didn't allow loops.
> With bounded loops it's now possible to construct a valid program that
> jumps backwards to the 1st insn.
> Fix the JIT by computing:
> addrs[0] - offset of the end of the prologue == start of the 1st insn.
> addrs[1] - offset of the end of the 1st insn.
> 
> Reported-by: syzbot+35101610ff3e83119b1b@syzkaller.appspotmail.com
> Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
> Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>

Do we need a similar fix for x86_32?

Thanks,
Song

> ---
> arch/x86/net/bpf_jit_comp.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index eaaed5bfc4a4..a56c95805732 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -390,8 +390,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
> 
> 	emit_prologue(&prog, bpf_prog->aux->stack_depth,
> 		      bpf_prog_was_classic(bpf_prog));
> +	addrs[0] = prog - temp;
> 
> -	for (i = 0; i < insn_cnt; i++, insn++) {
> +	for (i = 1; i <= insn_cnt; i++, insn++) {
> 		const s32 imm32 = insn->imm;
> 		u32 dst_reg = insn->dst_reg;
> 		u32 src_reg = insn->src_reg;
> @@ -1105,7 +1106,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> 		extra_pass = true;
> 		goto skip_init_addrs;
> 	}
> -	addrs = kmalloc_array(prog->len, sizeof(*addrs), GFP_KERNEL);
> +	addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
> 	if (!addrs) {
> 		prog = orig_prog;
> 		goto out_addrs;
> @@ -1115,7 +1116,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> 	 * Before first pass, make a rough estimation of addrs[]
> 	 * each BPF instruction is translated to less than 64 bytes
> 	 */
> -	for (proglen = 0, i = 0; i < prog->len; i++) {
> +	for (proglen = 0, i = 0; i <= prog->len; i++) {
> 		proglen += 64;
> 		addrs[i] = proglen;
> 	}
> -- 
> 2.20.0
> 



* Re: [PATCH bpf 2/2] selftests/bpf: tests for jmp to 1st insn
  2019-07-31  1:38 ` [PATCH bpf 2/2] selftests/bpf: tests " Alexei Starovoitov
@ 2019-07-31 19:38   ` Song Liu
  0 siblings, 0 replies; 7+ messages in thread
From: Song Liu @ 2019-07-31 19:38 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David S. Miller, daniel, netdev, bpf, Kernel Team



> On Jul 30, 2019, at 6:38 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Add 2 tests that check JIT code generation for jumps to the 1st insn.
> The 1st test is similar to the syzbot reproducer.
> Its backward branch is never taken at runtime.
> The 2nd test has a branch to the 1st insn that is taken at runtime.
> The test is written as two bpf functions, since it's not possible
> to construct a valid single bpf program that jumps to its 1st insn.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
> tools/testing/selftests/bpf/verifier/loops1.c | 28 +++++++++++++++++++
> 1 file changed, 28 insertions(+)
> 
> diff --git a/tools/testing/selftests/bpf/verifier/loops1.c b/tools/testing/selftests/bpf/verifier/loops1.c
> index 5e980a5ab69d..1fc4e61e9f9f 100644
> --- a/tools/testing/selftests/bpf/verifier/loops1.c
> +++ b/tools/testing/selftests/bpf/verifier/loops1.c
> @@ -159,3 +159,31 @@
> 	.errstr = "loop detected",
> 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
> },
> +{
> +	"not-taken loop with back jump to 1st insn",
> +	.insns = {
> +	BPF_MOV64_IMM(BPF_REG_0, 123),
> +	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, -2),
> +	BPF_EXIT_INSN(),
> +	},
> +	.result = ACCEPT,
> +	.prog_type = BPF_PROG_TYPE_XDP,
> +	.retval = 123,
> +},
> +{
> +	"taken loop with back jump to 1st insn",
> +	.insns = {
> +	BPF_MOV64_IMM(BPF_REG_1, 10),
> +	BPF_MOV64_IMM(BPF_REG_2, 0),
> +	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
> +	BPF_EXIT_INSN(),
> +	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1),
> +	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
> +	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, -3),
> +	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
> +	BPF_EXIT_INSN(),
> +	},
> +	.result = ACCEPT,
> +	.prog_type = BPF_PROG_TYPE_XDP,
> +	.retval = 55,
> +},
> -- 
> 2.20.0
> 



* Re: [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn
  2019-07-31 19:35   ` Song Liu
@ 2019-08-01  3:43     ` Alexei Starovoitov
  2019-08-01 20:44       ` Alexei Starovoitov
  0 siblings, 1 reply; 7+ messages in thread
From: Alexei Starovoitov @ 2019-08-01  3:43 UTC (permalink / raw)
  To: Song Liu
  Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann, netdev,
	bpf, Kernel Team

On Wed, Jul 31, 2019 at 12:36 PM Song Liu <songliubraving@fb.com> wrote:
>
>
>
> > On Jul 30, 2019, at 6:38 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> >
> > The introduction of bounded loops exposed an old bug in the x64 JIT.
> > The JIT maintains an array of offsets to the end of all instructions to
> > compute jmp offsets:
> > addrs[0] - offset of the end of the 1st insn (which includes the prologue).
> > addrs[1] - offset of the end of the 2nd insn.
> > The JIT didn't keep the offset of the beginning of the 1st insn,
> > since classic BPF didn't have backward jumps and valid extended BPF
> > couldn't have a branch to the 1st insn, because it didn't allow loops.
> > With bounded loops it's now possible to construct a valid program that
> > jumps backwards to the 1st insn.
> > Fix the JIT by computing:
> > addrs[0] - offset of the end of the prologue == start of the 1st insn.
> > addrs[1] - offset of the end of the 1st insn.
> >
> > Reported-by: syzbot+35101610ff3e83119b1b@syzkaller.appspotmail.com
> > Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
> > Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>
> Acked-by: Song Liu <songliubraving@fb.com>
>
> Do we need a similar fix for x86_32?

Right. x86_32 would need a similar fix.

Applied to bpf tree.


* Re: [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn
  2019-08-01  3:43     ` Alexei Starovoitov
@ 2019-08-01 20:44       ` Alexei Starovoitov
  0 siblings, 0 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2019-08-01 20:44 UTC (permalink / raw)
  To: Song Liu
  Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann, netdev,
	bpf, Kernel Team

On Wed, Jul 31, 2019 at 8:43 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Jul 31, 2019 at 12:36 PM Song Liu <songliubraving@fb.com> wrote:
> >
> >
> >
> > > On Jul 30, 2019, at 6:38 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> > >
> > > The introduction of bounded loops exposed an old bug in the x64 JIT.
> > > The JIT maintains an array of offsets to the end of all instructions to
> > > compute jmp offsets:
> > > addrs[0] - offset of the end of the 1st insn (which includes the prologue).
> > > addrs[1] - offset of the end of the 2nd insn.
> > > The JIT didn't keep the offset of the beginning of the 1st insn,
> > > since classic BPF didn't have backward jumps and valid extended BPF
> > > couldn't have a branch to the 1st insn, because it didn't allow loops.
> > > With bounded loops it's now possible to construct a valid program that
> > > jumps backwards to the 1st insn.
> > > Fix the JIT by computing:
> > > addrs[0] - offset of the end of the prologue == start of the 1st insn.
> > > addrs[1] - offset of the end of the 1st insn.
> > >
> > > Reported-by: syzbot+35101610ff3e83119b1b@syzkaller.appspotmail.com
> > > Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
> > > Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
> > > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> >
> > Acked-by: Song Liu <songliubraving@fb.com>
> >
> > Do we need a similar fix for x86_32?
>
> Right. x86_32 would need a similar fix.
>
> Applied to bpf tree.

Yonghong noticed that it subtly changes the jited linfo.
Surprisingly, the perf-annotated output for source code in jited bpf progs
looks exactly the same for several large bpf progs that I've looked at.
This is to be investigated later.

I've applied the fix:
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a56c95805732..991549a1c5f3 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1181,7 +1181,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)

        if (!image || !prog->is_func || extra_pass) {
                if (image)
-                       bpf_prog_fill_jited_linfo(prog, addrs);
+                       bpf_prog_fill_jited_linfo(prog, addrs + 1);
 out_addrs:
                kfree(addrs);
                kfree(jit_data);
and re-pushed the bpf tree.
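
Why the "+ 1" (a sketch of the indexing only, assuming
bpf_prog_fill_jited_linfo() keeps expecting the pre-series view of
addrs[]; not a claim about the helper's internals):

	/* new addrs[] layout after this series:
	 *   addrs[0] - end of the prologue (start of the 1st insn)
	 *   addrs[i] - end of the i-th insn, for i >= 1
	 * the old layout had entry i as the end of the (i + 1)-th insn,
	 * so shift the array by one to give the helper that view:
	 *   (addrs + 1)[i] == addrs[i + 1]
	 */
	bpf_prog_fill_jited_linfo(prog, addrs + 1);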
The new commit is here:
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git/commit/?id=7c2e988f400e83501e0a3568250780609b7c8263

Thanks Yonghong!

For bpf-next we need to figure out how to make test_btf more robust.
We can probably check the first few insns for specific jited offsets,
but I don't yet see how to make that work for all archs.
And it will be annoying to keep it working with every change to the jit.


