bpf.vger.kernel.org archive mirror
* [PATCH 0/9] powerpc/bpf: Various fixes
@ 2021-10-01 21:14 Naveen N. Rao
  2021-10-01 21:14 ` [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range Naveen N. Rao
                   ` (9 more replies)
  0 siblings, 10 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

Various fixes to the eBPF JIT for powerpc, thanks to some new tests 
added by Johan. This series fixes all failures in test_bpf on powerpc64.  
There are still some failures on powerpc32 to be looked into.

- Naveen


Naveen N. Rao (8):
  powerpc/lib: Add helper to check if offset is within conditional
    branch range
  powerpc/bpf: Validate branch ranges
  powerpc/bpf: Handle large branch ranges with BPF_EXIT
  powerpc/bpf: Fix BPF_MOD when imm == 1
  powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
  powerpc/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06
  powerpc/security: Add a helper to query stf_barrier type
  powerpc/bpf: Emit stf barrier instruction sequences for BPF_NOSPEC

Ravi Bangoria (1):
  powerpc/bpf: Remove unused SEEN_STACK

 arch/powerpc/include/asm/code-patching.h     |   1 +
 arch/powerpc/include/asm/ppc-opcode.h        |   1 +
 arch/powerpc/include/asm/security_features.h |   5 +
 arch/powerpc/kernel/security.c               |   5 +
 arch/powerpc/lib/code-patching.c             |   7 +-
 arch/powerpc/net/bpf_jit.h                   |  39 ++++---
 arch/powerpc/net/bpf_jit64.h                 |   8 +-
 arch/powerpc/net/bpf_jit_comp.c              |  28 ++++-
 arch/powerpc/net/bpf_jit_comp32.c            |  10 +-
 arch/powerpc/net/bpf_jit_comp64.c            | 113 ++++++++++++++-----
 10 files changed, 167 insertions(+), 50 deletions(-)


base-commit: 044c2d99d9f43c6d6fde8bed00672517dd9a5a57
-- 
2.33.0



* [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
@ 2021-10-01 21:14 ` Naveen N. Rao
  2021-10-01 21:37   ` Song Liu
  2021-10-03  7:50   ` Christophe Leroy
  2021-10-01 21:14 ` [PATCH 2/9] powerpc/bpf: Validate branch ranges Naveen N. Rao
                   ` (8 subsequent siblings)
  9 siblings, 2 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

Add a helper to check if a given offset is within the branch range for a
powerpc conditional branch instruction, and update some sites to use the
new helper.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
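For reference, powerpc conditional branches (bc) encode a 16-bit signed,
word-aligned displacement, so the reachable range is -0x8000 to 0x7ffc.
A standalone userspace sketch of the check added here, for illustration
only (not part of the patch):

  #include <stdbool.h>
  #include <stdio.h>

  /* Same logic as the new kernel helper. */
  static bool is_offset_in_cond_branch_range(long offset)
  {
          return offset >= -0x8000 && offset <= 0x7FFF && !(offset & 0x3);
  }

  int main(void)
  {
          printf("%d\n", is_offset_in_cond_branch_range(0x7ffc));   /* 1: max forward */
          printf("%d\n", is_offset_in_cond_branch_range(0x8000));   /* 0: too far */
          printf("%d\n", is_offset_in_cond_branch_range(-0x8000));  /* 1: max backward */
          printf("%d\n", is_offset_in_cond_branch_range(0x1002));   /* 0: not word-aligned */
          return 0;
  }
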
 arch/powerpc/include/asm/code-patching.h | 1 +
 arch/powerpc/lib/code-patching.c         | 7 ++++++-
 arch/powerpc/net/bpf_jit.h               | 7 +------
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index a95f63788c6b14..4ba834599c4d4c 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -23,6 +23,7 @@
 #define BRANCH_ABSOLUTE	0x2
 
 bool is_offset_in_branch_range(long offset);
+bool is_offset_in_cond_branch_range(long offset);
 int create_branch(struct ppc_inst *instr, const u32 *addr,
 		  unsigned long target, int flags);
 int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index f9a3019e37b43c..e2342b9a1ab9c9 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -228,6 +228,11 @@ bool is_offset_in_branch_range(long offset)
 	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
 }
 
+bool is_offset_in_cond_branch_range(long offset)
+{
+	return offset >= -0x8000 && offset <= 0x7FFF && !(offset & 0x3);
+}
+
 /*
  * Helper to check if a given instruction is a conditional branch
  * Derived from the conditional checks in analyse_instr()
@@ -280,7 +285,7 @@ int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
 		offset = offset - (unsigned long)addr;
 
 	/* Check we can represent the target in the instruction format */
-	if (offset < -0x8000 || offset > 0x7FFF || offset & 0x3)
+	if (!is_offset_in_cond_branch_range(offset))
 		return 1;
 
 	/* Mask out the flags and target, so they don't step on each other. */
diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 99fad093f43ec1..935ea95b66359e 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -78,11 +78,6 @@
 #define PPC_FUNC_ADDR(d,i) do { PPC_LI32(d, i); } while(0)
 #endif
 
-static inline bool is_nearbranch(int offset)
-{
-	return (offset < 32768) && (offset >= -32768);
-}
-
 /*
  * The fly in the ointment of code size changing from pass to pass is
  * avoided by padding the short branch case with a NOP.	 If code size differs
@@ -91,7 +86,7 @@ static inline bool is_nearbranch(int offset)
  * state.
  */
 #define PPC_BCC(cond, dest)	do {					      \
-		if (is_nearbranch((dest) - (ctx->idx * 4))) {		      \
+		if (is_offset_in_cond_branch_range((long)(dest) - (ctx->idx * 4))) {	\
 			PPC_BCC_SHORT(cond, dest);			      \
 			EMIT(PPC_RAW_NOP());				      \
 		} else {						      \
-- 
2.33.0



* [PATCH 2/9] powerpc/bpf: Validate branch ranges
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
  2021-10-01 21:14 ` [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range Naveen N. Rao
@ 2021-10-01 21:14 ` Naveen N. Rao
  2021-10-01 21:45   ` Song Liu
                     ` (2 more replies)
  2021-10-01 21:14 ` [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK Naveen N. Rao
                   ` (7 subsequent siblings)
  9 siblings, 3 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

Add checks to ensure that we never emit branch instructions with
truncated branch offsets.

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
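Note that the reworked PPC_JMP()/PPC_BCC_SHORT() macros bail out of the
enclosing function with -ERANGE, which is why the emitters below now
return int and their callers check the result. A standalone sketch of
that macro pattern, with illustrative names (not the kernel API):

  #include <errno.h>
  #include <stdio.h>

  /* Validate before emitting; on overflow, return from the *caller*. */
  #define EMIT_BRANCH_CHECKED(offset)                                   \
          do {                                                          \
                  if ((offset) < -0x2000000 || (offset) > 0x1fffffc) {  \
                          fprintf(stderr, "offset %ld out of range\n",  \
                                  (long)(offset));                      \
                          return -ERANGE;                               \
                  }                                                     \
                  printf("emit branch to %+ld\n", (long)(offset));      \
          } while (0)

  static int emit_one(long offset)
  {
          EMIT_BRANCH_CHECKED(offset);
          return 0;
  }

  int main(void)
  {
          printf("ret = %d\n", emit_one(64));          /* emitted */
          printf("ret = %d\n", emit_one(0x4000000));   /* -ERANGE, nothing emitted */
          return 0;
  }
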
 arch/powerpc/net/bpf_jit.h        | 26 ++++++++++++++++++++------
 arch/powerpc/net/bpf_jit_comp.c   |  6 +++++-
 arch/powerpc/net/bpf_jit_comp32.c |  8 ++++++--
 arch/powerpc/net/bpf_jit_comp64.c |  8 ++++++--
 4 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 935ea95b66359e..7e9b978b768ed9 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -24,16 +24,30 @@
 #define EMIT(instr)		PLANT_INSTR(image, ctx->idx, instr)
 
 /* Long jump; (unconditional 'branch') */
-#define PPC_JMP(dest)		EMIT(PPC_INST_BRANCH |			      \
-				     (((dest) - (ctx->idx * 4)) & 0x03fffffc))
+#define PPC_JMP(dest)							      \
+	do {								      \
+		long offset = (long)(dest) - (ctx->idx * 4);		      \
+		if (!is_offset_in_branch_range(offset)) {		      \
+			pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);			\
+			return -ERANGE;					      \
+		}							      \
+		EMIT(PPC_INST_BRANCH | (offset & 0x03fffffc));		      \
+	} while (0)
+
 /* blr; (unconditional 'branch' with link) to absolute address */
 #define PPC_BL_ABS(dest)	EMIT(PPC_INST_BL |			      \
 				     (((dest) - (unsigned long)(image + ctx->idx)) & 0x03fffffc))
 /* "cond" here covers BO:BI fields. */
-#define PPC_BCC_SHORT(cond, dest)	EMIT(PPC_INST_BRANCH_COND |	      \
-					     (((cond) & 0x3ff) << 16) |	      \
-					     (((dest) - (ctx->idx * 4)) &     \
-					      0xfffc))
+#define PPC_BCC_SHORT(cond, dest)					      \
+	do {								      \
+		long offset = (long)(dest) - (ctx->idx * 4);		      \
+		if (!is_offset_in_cond_branch_range(offset)) {		      \
+			pr_err_ratelimited("Conditional branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);		\
+			return -ERANGE;					      \
+		}							      \
+		EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc));					\
+	} while (0)
+
 /* Sign-extended 32-bit immediate load */
 #define PPC_LI32(d, i)		do {					      \
 		if ((int)(uintptr_t)(i) >= -32768 &&			      \
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 53aefee3fe70be..fcbf7a917c566e 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -210,7 +210,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 		/* Now build the prologue, body code & epilogue for real. */
 		cgctx.idx = 0;
 		bpf_jit_build_prologue(code_base, &cgctx);
-		bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass);
+		if (bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass)) {
+			bpf_jit_binary_free(bpf_hdr);
+			fp = org_fp;
+			goto out_addrs;
+		}
 		bpf_jit_build_epilogue(code_base, &cgctx);
 
 		if (bpf_jit_enable > 1)
diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index beb12cbc8c2994..a74d52204f8da2 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -200,7 +200,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
 	}
 }
 
-static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
+static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
 {
 	/*
 	 * By now, the eBPF program has already setup parameters in r3-r6
@@ -261,7 +261,9 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
 	bpf_jit_emit_common_epilogue(image, ctx);
 
 	EMIT(PPC_RAW_BCTR());
+
 	/* out: */
+	return 0;
 }
 
 /* Assemble the body code between the prologue & epilogue */
@@ -1090,7 +1092,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		 */
 		case BPF_JMP | BPF_TAIL_CALL:
 			ctx->seen |= SEEN_TAILCALL;
-			bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
+			ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
+			if (ret < 0)
+				return ret;
 			break;
 
 		default:
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index b87a63dba9c8fb..f06c62089b1457 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -206,7 +206,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
 	EMIT(PPC_RAW_BCTRL());
 }
 
-static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
+static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
 {
 	/*
 	 * By now, the eBPF program has already setup parameters in r3, r4 and r5
@@ -267,7 +267,9 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
 	bpf_jit_emit_common_epilogue(image, ctx);
 
 	EMIT(PPC_RAW_BCTR());
+
 	/* out: */
+	return 0;
 }
 
 /* Assemble the body code between the prologue & epilogue */
@@ -993,7 +995,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		 */
 		case BPF_JMP | BPF_TAIL_CALL:
 			ctx->seen |= SEEN_TAILCALL;
-			bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
+			ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
+			if (ret < 0)
+				return ret;
 			break;
 
 		default:
-- 
2.33.0



* [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
  2021-10-01 21:14 ` [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range Naveen N. Rao
  2021-10-01 21:14 ` [PATCH 2/9] powerpc/bpf: Validate branch ranges Naveen N. Rao
@ 2021-10-01 21:14 ` Naveen N. Rao
  2021-10-01 21:47   ` Song Liu
                     ` (2 more replies)
  2021-10-01 21:14 ` [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT Naveen N. Rao
                   ` (6 subsequent siblings)
  9 siblings, 3 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

From: Ravi Bangoria <ravi.bangoria@linux.ibm.com>

SEEN_STACK is unused on PowerPC. Remove it. Also, have
SEEN_TAILCALL use 0x40000000.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/net/bpf_jit.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 7e9b978b768ed9..89bd744c2bffd4 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -125,8 +125,7 @@
 #define COND_LE		(CR0_GT | COND_CMP_FALSE)
 
 #define SEEN_FUNC	0x20000000 /* might call external helpers */
-#define SEEN_STACK	0x40000000 /* uses BPF stack */
-#define SEEN_TAILCALL	0x80000000 /* uses tail calls */
+#define SEEN_TAILCALL	0x40000000 /* uses tail calls */
 
 #define SEEN_VREG_MASK	0x1ff80000 /* Volatile registers r3-r12 */
 #define SEEN_NVREG_MASK	0x0003ffff /* Non volatile registers r14-r31 */
-- 
2.33.0



* [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
                   ` (2 preceding siblings ...)
  2021-10-01 21:14 ` [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK Naveen N. Rao
@ 2021-10-01 21:14 ` Naveen N. Rao
  2021-10-01 21:53   ` Song Liu
                     ` (2 more replies)
  2021-10-01 21:14 ` [PATCH 5/9] powerpc/bpf: Fix BPF_MOD when imm == 1 Naveen N. Rao
                   ` (5 subsequent siblings)
  9 siblings, 3 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

In some scenarios, it is possible that the program epilogue is outside
the branch range for a BPF_EXIT instruction. Instead of rejecting such
programs, emit an indirect branch. We track the size of the bpf program
emitted after the initial run and do a second pass, since BPF_EXIT can
end up emitting a different number of instructions depending on the
program size.

Suggested-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
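An unconditional 'b' has a 26-bit signed, word-aligned displacement, so
the epilogue stops being directly reachable once the JITed image
approaches 32MB. A small standalone sketch of that threshold, with
illustrative program sizes only:

  #include <stdbool.h>
  #include <stdio.h>

  /* Same range as is_offset_in_branch_range(): +/- 32MB, word-aligned. */
  static bool is_offset_in_branch_range(long offset)
  {
          return offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3);
  }

  int main(void)
  {
          /* Worst case: a BPF_EXIT at the start branching to the epilogue. */
          long ninsns[] = { 1000, 0x2000000 / 4 - 1, 0x2000000 / 4 + 1 };

          for (int i = 0; i < 3; i++)
                  printf("%ld insns: %s\n", ninsns[i],
                         is_offset_in_branch_range(ninsns[i] * 4) ?
                         "direct 'b' to epilogue" :
                         "load address + mtctr + bctr");
          return 0;
  }
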
 arch/powerpc/net/bpf_jit.h        |  3 +++
 arch/powerpc/net/bpf_jit_comp.c   | 22 +++++++++++++++++++++-
 arch/powerpc/net/bpf_jit_comp32.c |  2 +-
 arch/powerpc/net/bpf_jit_comp64.c |  2 +-
 4 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 89bd744c2bffd4..4023de1698b9f5 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -126,6 +126,7 @@
 
 #define SEEN_FUNC	0x20000000 /* might call external helpers */
 #define SEEN_TAILCALL	0x40000000 /* uses tail calls */
+#define SEEN_BIG_PROG	0x80000000 /* large prog, >32MB */
 
 #define SEEN_VREG_MASK	0x1ff80000 /* Volatile registers r3-r12 */
 #define SEEN_NVREG_MASK	0x0003ffff /* Non volatile registers r14-r31 */
@@ -179,6 +180,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx);
 void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx);
 void bpf_jit_realloc_regs(struct codegen_context *ctx);
+int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
+					int tmp_reg, unsigned long exit_addr);
 
 #endif
 
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index fcbf7a917c566e..3204872fbf2738 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -72,6 +72,21 @@ static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
 	return 0;
 }
 
+int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
+					int tmp_reg, unsigned long exit_addr)
+{
+	if (!(ctx->seen & SEEN_BIG_PROG) && is_offset_in_branch_range(exit_addr)) {
+		PPC_JMP(exit_addr);
+	} else {
+		ctx->seen |= SEEN_BIG_PROG;
+		PPC_FUNC_ADDR(tmp_reg, (unsigned long)image + exit_addr);
+		EMIT(PPC_RAW_MTCTR(tmp_reg));
+		EMIT(PPC_RAW_BCTR());
+	}
+
+	return 0;
+}
+
 struct powerpc64_jit_data {
 	struct bpf_binary_header *header;
 	u32 *addrs;
@@ -155,12 +170,17 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 		goto out_addrs;
 	}
 
+	if (!is_offset_in_branch_range((long)cgctx.idx * 4))
+		cgctx.seen |= SEEN_BIG_PROG;
+
 	/*
 	 * If we have seen a tail call, we need a second pass.
 	 * This is because bpf_jit_emit_common_epilogue() is called
 	 * from bpf_jit_emit_tail_call() with a not yet stable ctx->seen.
+	 * We also need a second pass if we ended up with too large
+	 * a program so as to fix branches.
 	 */
-	if (cgctx.seen & SEEN_TAILCALL) {
+	if (cgctx.seen & (SEEN_TAILCALL | SEEN_BIG_PROG)) {
 		cgctx.idx = 0;
 		if (bpf_jit_build_body(fp, 0, &cgctx, addrs, false)) {
 			fp = org_fp;
diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index a74d52204f8da2..d2a67574a23066 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -852,7 +852,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			 * we'll just fall through to the epilogue.
 			 */
 			if (i != flen - 1)
-				PPC_JMP(exit_addr);
+				bpf_jit_emit_exit_insn(image, ctx, tmp_reg, exit_addr);
 			/* else fall through to the epilogue */
 			break;
 
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index f06c62089b1457..3351a866ef6207 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -761,7 +761,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			 * we'll just fall through to the epilogue.
 			 */
 			if (i != flen - 1)
-				PPC_JMP(exit_addr);
+				bpf_jit_emit_exit_insn(image, ctx, b2p[TMP_REG_1], exit_addr);
 			/* else fall through to the epilogue */
 			break;
 
-- 
2.33.0



* [PATCH 5/9] powerpc/bpf: Fix BPF_MOD when imm == 1
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
                   ` (3 preceding siblings ...)
  2021-10-01 21:14 ` [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT Naveen N. Rao
@ 2021-10-01 21:14 ` Naveen N. Rao
  2021-10-01 21:55   ` Song Liu
  2021-10-02 17:32   ` Johan Almbladh
  2021-10-01 21:14 ` [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000 Naveen N. Rao
                   ` (4 subsequent siblings)
  9 siblings, 2 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

Only ignore the operation when dividing by 1. For BPF_MOD with an
immediate of 1, the result is always 0, so load 0 into the destination
register instead.

Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
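The two operations differ when the immediate is 1: division by 1 leaves
the destination unchanged, while modulo by 1 is always 0. A trivial
illustration (not part of the patch):

  #include <stdio.h>

  int main(void)
  {
          unsigned long long dst = 0x0123456789abcdefULL;

          /* BPF_DIV by 1: destination unchanged, nothing needs to be emitted. */
          printf("dst / 1 = 0x%llx\n", dst / 1);
          /* BPF_MOD by 1: result is always 0, so 'li dst, 0' must be emitted. */
          printf("dst %% 1 = 0x%llx\n", dst % 1);
          return 0;
  }
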
 arch/powerpc/net/bpf_jit_comp64.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 3351a866ef6207..ffb7a2877a8469 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -391,8 +391,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */
 			if (imm == 0)
 				return -EINVAL;
-			else if (imm == 1)
-				goto bpf_alu32_trunc;
+			if (imm == 1) {
+				if (BPF_OP(code) == BPF_DIV) {
+					goto bpf_alu32_trunc;
+				} else {
+					EMIT(PPC_RAW_LI(dst_reg, 0));
+					break;
+				}
+			}
 
 			PPC_LI32(b2p[TMP_REG_1], imm);
 			switch (BPF_CLASS(code)) {
-- 
2.33.0



* [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
                   ` (4 preceding siblings ...)
  2021-10-01 21:14 ` [PATCH 5/9] powerpc/bpf: Fix BPF_MOD when imm == 1 Naveen N. Rao
@ 2021-10-01 21:14 ` Naveen N. Rao
  2021-10-01 22:01   ` Song Liu
                     ` (2 more replies)
  2021-10-01 21:14 ` [PATCH 7/9] powerpc/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06 Naveen N. Rao
                   ` (3 subsequent siblings)
  9 siblings, 3 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

We aren't handling subtraction involving an immediate value of
0x80000000 properly: negating it overflows a 32-bit signed integer, so
the wrong value gets added to the destination register. Fix this by
emitting a 'sub' instruction for large immediates instead of negating
the immediate and adding it.

Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
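The underlying problem is the unconditional 'imm = -imm' for BPF_SUB:
0x80000000 is INT_MIN, and negating it in 32 bits wraps back to the same
value, so the old code effectively added the sign-extended immediate
rather than subtracting it. A small standalone sketch of the wrap, for
illustration only:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          int32_t imm = INT32_MIN;              /* 0x80000000 */
          int64_t wanted = -(int64_t)imm;       /* dst -= imm should add 2147483648 */
          uint32_t wrapped = (uint32_t)wanted;  /* what a 32-bit 'imm = -imm' keeps */

          printf("imm = 0x%08x\n", (uint32_t)imm);
          printf("-imm in 64 bits = %lld, truncated to 32 bits = 0x%08x\n",
                 (long long)wanted, wrapped);
          return 0;
  }
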
 arch/powerpc/net/bpf_jit_comp64.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index ffb7a2877a8469..4641a50e82d50d 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -333,15 +333,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
 		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
 		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
-			if (BPF_OP(code) == BPF_SUB)
-				imm = -imm;
-			if (imm) {
-				if (imm >= -32768 && imm < 32768)
-					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
-				else {
-					PPC_LI32(b2p[TMP_REG_1], imm);
+			if (imm > -32768 && imm < 32768) {
+				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg,
+					BPF_OP(code) == BPF_SUB ? IMM_L(-imm) : IMM_L(imm)));
+			} else {
+				PPC_LI32(b2p[TMP_REG_1], imm);
+				if (BPF_OP(code) == BPF_SUB)
+					EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
+				else
 					EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
-				}
 			}
 			goto bpf_alu32_trunc;
 		case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
-- 
2.33.0



* [PATCH 7/9] powerpc/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
                   ` (5 preceding siblings ...)
  2021-10-01 21:14 ` [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000 Naveen N. Rao
@ 2021-10-01 21:14 ` Naveen N. Rao
  2021-10-02 17:35   ` Johan Almbladh
  2021-10-01 21:14 ` [PATCH 8/9] powerpc/security: Add a helper to query stf_barrier type Naveen N. Rao
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

Johan reported the below crash with test_bpf on ppc64 e5500:

  test_bpf: #296 ALU_END_FROM_LE 64: 0x0123456789abcdef -> 0x67452301 jited:1
  Oops: Exception in kernel mode, sig: 4 [#1]
  BE PAGE_SIZE=4K SMP NR_CPUS=24 QEMU e500
  Modules linked in: test_bpf(+)
  CPU: 0 PID: 76 Comm: insmod Not tainted 5.14.0-03771-g98c2059e008a-dirty #1
  NIP:  8000000000061c3c LR: 80000000006dea64 CTR: 8000000000061c18
  REGS: c0000000032d3420 TRAP: 0700   Not tainted (5.14.0-03771-g98c2059e008a-dirty)
  MSR:  0000000080089000 <EE,ME>  CR: 88002822  XER: 20000000 IRQMASK: 0
  <...>
  NIP [8000000000061c3c] 0x8000000000061c3c
  LR [80000000006dea64] .__run_one+0x104/0x17c [test_bpf]
  Call Trace:
   .__run_one+0x60/0x17c [test_bpf] (unreliable)
   .test_bpf_init+0x6a8/0xdc8 [test_bpf]
   .do_one_initcall+0x6c/0x28c
   .do_init_module+0x68/0x28c
   .load_module+0x2460/0x2abc
   .__do_sys_init_module+0x120/0x18c
   .system_call_exception+0x110/0x1b8
   system_call_common+0xf0/0x210
  --- interrupt: c00 at 0x101d0acc
  <...>
  ---[ end trace 47b2bf19090bb3d0 ]---

  Illegal instruction

The illegal instruction turned out to be 'ldbrx' emitted for
BPF_FROM_[L|B]E, which was only introduced in ISA v2.06. Guard its use
with a CPU feature check and implement an alternative approach, based
on 'lwbrx', for older processors.

Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Reported-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
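On processors older than ISA v2.06 the 64-bit byte-reversed load is
composed from two 32-bit byte-reversed loads (lwbrx), with one half
shifted into the upper word and the two OR-ed together (which half is
shifted depends on endianness). A plain C sketch of the arithmetic the
emitted sequence computes, for illustration only:

  #include <stdint.h>
  #include <stdio.h>

  static uint32_t bswap32(uint32_t v)
  {
          return (v >> 24) | ((v >> 8) & 0xff00) |
                 ((v << 8) & 0xff0000) | (v << 24);
  }

  int main(void)
  {
          uint64_t val = 0x0123456789abcdefULL;

          /* Byte-reverse each 32-bit half, shift one half up, OR together. */
          uint64_t swapped = ((uint64_t)bswap32((uint32_t)val) << 32) |
                             bswap32((uint32_t)(val >> 32));

          printf("0x%016llx -> 0x%016llx\n",
                 (unsigned long long)val, (unsigned long long)swapped);
          return 0;
  }
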
 arch/powerpc/include/asm/ppc-opcode.h |  1 +
 arch/powerpc/net/bpf_jit_comp64.c     | 22 +++++++++++++---------
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index baea657bc8687e..bca31a61e57f88 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -498,6 +498,7 @@
 #define PPC_RAW_LDX(r, base, b)		(0x7c00002a | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
 #define PPC_RAW_LHZ(r, base, i)		(0xa0000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i))
 #define PPC_RAW_LHBRX(r, base, b)	(0x7c00062c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
+#define PPC_RAW_LWBRX(r, base, b)	(0x7c00042c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
 #define PPC_RAW_LDBRX(r, base, b)	(0x7c000428 | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
 #define PPC_RAW_STWCX(s, a, b)		(0x7c00012d | ___PPC_RS(s) | ___PPC_RA(a) | ___PPC_RB(b))
 #define PPC_RAW_CMPWI(a, i)		(0x2c000000 | ___PPC_RA(a) | IMM_L(i))
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 4641a50e82d50d..097d1ecfb9f562 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -601,17 +601,21 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				EMIT(PPC_RAW_MR(dst_reg, b2p[TMP_REG_1]));
 				break;
 			case 64:
-				/*
-				 * Way easier and faster(?) to store the value
-				 * into stack and then use ldbrx
-				 *
-				 * ctx->seen will be reliable in pass2, but
-				 * the instructions generated will remain the
-				 * same across all passes
-				 */
+				/* Store the value to stack and then use byte-reverse loads */
 				PPC_BPF_STL(dst_reg, 1, bpf_jit_stack_local(ctx));
 				EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], 1, bpf_jit_stack_local(ctx)));
-				EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
+				if (cpu_has_feature(CPU_FTR_ARCH_206)) {
+					EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
+				} else {
+					EMIT(PPC_RAW_LWBRX(dst_reg, 0, b2p[TMP_REG_1]));
+					if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
+						EMIT(PPC_RAW_SLDI(dst_reg, dst_reg, 32));
+					EMIT(PPC_RAW_LI(b2p[TMP_REG_2], 4));
+					EMIT(PPC_RAW_LWBRX(b2p[TMP_REG_2], b2p[TMP_REG_2], b2p[TMP_REG_1]));
+					if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+						EMIT(PPC_RAW_SLDI(b2p[TMP_REG_2], b2p[TMP_REG_2], 32));
+					EMIT(PPC_RAW_OR(dst_reg, dst_reg, b2p[TMP_REG_2]));
+				}
 				break;
 			}
 			break;
-- 
2.33.0



* [PATCH 8/9] powerpc/security: Add a helper to query stf_barrier type
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
                   ` (6 preceding siblings ...)
  2021-10-01 21:14 ` [PATCH 7/9] powerpc/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06 Naveen N. Rao
@ 2021-10-01 21:14 ` Naveen N. Rao
  2021-10-01 21:14 ` [PATCH 9/9] powerpc/bpf: Emit stf barrier instruction sequences for BPF_NOSPEC Naveen N. Rao
  2021-10-02 17:41 ` [PATCH 0/9] powerpc/bpf: Various fixes Johan Almbladh
  9 siblings, 0 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

Add a helper to return the stf_barrier type for the current processor.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/security_features.h | 5 +++++
 arch/powerpc/kernel/security.c               | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
index 792eefaf230b80..27574f218b371f 100644
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -39,6 +39,11 @@ static inline bool security_ftr_enabled(u64 feature)
 	return !!(powerpc_security_features & feature);
 }
 
+#ifdef CONFIG_PPC_BOOK3S_64
+enum stf_barrier_type stf_barrier_type_get(void);
+#else
+static inline enum stf_barrier_type stf_barrier_type_get(void) { return STF_BARRIER_NONE; }
+#endif
 
 // Features indicating support for Spectre/Meltdown mitigations
 
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 1a998490fe60f0..15fb5ea1b9eafa 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -263,6 +263,11 @@ static int __init handle_no_stf_barrier(char *p)
 
 early_param("no_stf_barrier", handle_no_stf_barrier);
 
+enum stf_barrier_type stf_barrier_type_get(void)
+{
+	return stf_enabled_flush_types;
+}
+
 /* This is the generic flag used by other architectures */
 static int __init handle_ssbd(char *p)
 {
-- 
2.33.0



* [PATCH 9/9] powerpc/bpf: Emit stf barrier instruction sequences for BPF_NOSPEC
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
                   ` (7 preceding siblings ...)
  2021-10-01 21:14 ` [PATCH 8/9] powerpc/security: Add a helper to query stf_barrier type Naveen N. Rao
@ 2021-10-01 21:14 ` Naveen N. Rao
  2021-10-02 17:41 ` [PATCH 0/9] powerpc/bpf: Various fixes Johan Almbladh
  9 siblings, 0 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-01 21:14 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh
  Cc: bpf, linuxppc-dev

Emit similar instruction sequences to commit a048a07d7f4535
("powerpc/64s: Add support for a store forwarding barrier at kernel
entry/exit") when encountering BPF_NOSPEC.

Mitigations are enabled depending on what the firmware advertises. In
particular, we do not gate these mitigations based on current settings,
just like on x86. Due to this, we don't need to take any action if
mitigations are enabled or disabled at runtime.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
Thanks to Daniel Borkmann and Nick Piggin for their help in putting 
together this patch!


 arch/powerpc/net/bpf_jit64.h      |  8 ++---
 arch/powerpc/net/bpf_jit_comp64.c | 55 ++++++++++++++++++++++++++++---
 2 files changed, 55 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit64.h b/arch/powerpc/net/bpf_jit64.h
index 7b713edfa7e261..b63b35e45e558c 100644
--- a/arch/powerpc/net/bpf_jit64.h
+++ b/arch/powerpc/net/bpf_jit64.h
@@ -16,18 +16,18 @@
  * with our redzone usage.
  *
  *		[	prev sp		] <-------------
- *		[   nv gpr save area	] 6*8		|
+ *		[   nv gpr save area	] 5*8		|
  *		[    tail_call_cnt	] 8		|
- *		[    local_tmp_var	] 8		|
+ *		[    local_tmp_var	] 16		|
  * fp (r31) -->	[   ebpf stack space	] upto 512	|
  *		[     frame header	] 32/112	|
  * sp (r1) --->	[    stack pointer	] --------------
  */
 
 /* for gpr non volatile registers BPG_REG_6 to 10 */
-#define BPF_PPC_STACK_SAVE	(6*8)
+#define BPF_PPC_STACK_SAVE	(5*8)
 /* for bpf JIT code internal usage */
-#define BPF_PPC_STACK_LOCALS	16
+#define BPF_PPC_STACK_LOCALS	24
 /* stack frame excluding BPF stack, ensure this is quadword aligned */
 #define BPF_PPC_STACKFRAME	(STACK_FRAME_MIN_SIZE + \
 				 BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 097d1ecfb9f562..df1666f0b54d9b 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -15,6 +15,7 @@
 #include <linux/if_vlan.h>
 #include <asm/kprobes.h>
 #include <linux/bpf.h>
+#include <asm/security_features.h>
 
 #include "bpf_jit64.h"
 
@@ -35,9 +36,9 @@ static inline bool bpf_has_stack_frame(struct codegen_context *ctx)
  *		[	prev sp		] <-------------
  *		[	  ...       	] 		|
  * sp (r1) --->	[    stack pointer	] --------------
- *		[   nv gpr save area	] 6*8
+ *		[   nv gpr save area	] 5*8
  *		[    tail_call_cnt	] 8
- *		[    local_tmp_var	] 8
+ *		[    local_tmp_var	] 16
  *		[   unused red zone	] 208 bytes protected
  */
 static int bpf_jit_stack_local(struct codegen_context *ctx)
@@ -45,12 +46,12 @@ static int bpf_jit_stack_local(struct codegen_context *ctx)
 	if (bpf_has_stack_frame(ctx))
 		return STACK_FRAME_MIN_SIZE + ctx->stack_size;
 	else
-		return -(BPF_PPC_STACK_SAVE + 16);
+		return -(BPF_PPC_STACK_SAVE + 24);
 }
 
 static int bpf_jit_stack_tailcallcnt(struct codegen_context *ctx)
 {
-	return bpf_jit_stack_local(ctx) + 8;
+	return bpf_jit_stack_local(ctx) + 16;
 }
 
 static int bpf_jit_stack_offsetof(struct codegen_context *ctx, int reg)
@@ -272,10 +273,33 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
 	return 0;
 }
 
+/*
+ * We spill into the redzone always, even if the bpf program has its own stackframe.
+ * Offsets hardcoded based on BPF_PPC_STACK_SAVE -- see bpf_jit_stack_local()
+ */
+void bpf_stf_barrier(void);
+
+asm (
+"		.global bpf_stf_barrier		;"
+"	bpf_stf_barrier:			;"
+"		std	21,-64(1)		;"
+"		std	22,-56(1)		;"
+"		sync				;"
+"		ld	21,-64(1)		;"
+"		ld	22,-56(1)		;"
+"		ori	31,31,0			;"
+"		.rept 14			;"
+"		b	1f			;"
+"	1:					;"
+"		.endr				;"
+"		blr				;"
+);
+
 /* Assemble the body code between the prologue & epilogue */
 int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *ctx,
 		       u32 *addrs, bool extra_pass)
 {
+	enum stf_barrier_type stf_barrier = stf_barrier_type_get();
 	const struct bpf_insn *insn = fp->insnsi;
 	int flen = fp->len;
 	int i, ret;
@@ -643,6 +667,29 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		 * BPF_ST NOSPEC (speculation barrier)
 		 */
 		case BPF_ST | BPF_NOSPEC:
+			if (!security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) ||
+					!security_ftr_enabled(SEC_FTR_STF_BARRIER))
+				break;
+
+			switch (stf_barrier) {
+			case STF_BARRIER_EIEIO:
+				EMIT(PPC_RAW_EIEIO() | 0x02000000);
+				break;
+			case STF_BARRIER_SYNC_ORI:
+				EMIT(PPC_RAW_SYNC());
+				EMIT(PPC_RAW_LD(b2p[TMP_REG_1], _R13, 0));
+				EMIT(PPC_RAW_ORI(_R31, _R31, 0));
+				break;
+			case STF_BARRIER_FALLBACK:
+				EMIT(PPC_RAW_MFLR(b2p[TMP_REG_1]));
+				PPC_LI64(12, dereference_kernel_function_descriptor(bpf_stf_barrier));
+				EMIT(PPC_RAW_MTCTR(12));
+				EMIT(PPC_RAW_BCTRL());
+				EMIT(PPC_RAW_MTLR(b2p[TMP_REG_1]));
+				break;
+			case STF_BARRIER_NONE:
+				break;
+			}
 			break;
 
 		/*
-- 
2.33.0



* Re: [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range
  2021-10-01 21:14 ` [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range Naveen N. Rao
@ 2021-10-01 21:37   ` Song Liu
  2021-10-04 18:02     ` Naveen N. Rao
  2021-10-03  7:50   ` Christophe Leroy
  1 sibling, 1 reply; 40+ messages in thread
From: Song Liu @ 2021-10-01 21:37 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh, bpf,
	linuxppc-dev

On Fri, Oct 1, 2021 at 2:16 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> Add a helper to check if a given offset is within the branch range for a
> powerpc conditional branch instruction, and update some sites to use the
> new helper.
>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Song Liu <songliubraving@fb.com>

With one nitpick:

> ---
>  arch/powerpc/include/asm/code-patching.h | 1 +
>  arch/powerpc/lib/code-patching.c         | 7 ++++++-
>  arch/powerpc/net/bpf_jit.h               | 7 +------
>  3 files changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
> index a95f63788c6b14..4ba834599c4d4c 100644
> --- a/arch/powerpc/include/asm/code-patching.h
> +++ b/arch/powerpc/include/asm/code-patching.h
> @@ -23,6 +23,7 @@
>  #define BRANCH_ABSOLUTE        0x2
>
>  bool is_offset_in_branch_range(long offset);
> +bool is_offset_in_cond_branch_range(long offset);
>  int create_branch(struct ppc_inst *instr, const u32 *addr,
>                   unsigned long target, int flags);
>  int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index f9a3019e37b43c..e2342b9a1ab9c9 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -228,6 +228,11 @@ bool is_offset_in_branch_range(long offset)
>         return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
>  }
>
> +bool is_offset_in_cond_branch_range(long offset)
> +{
> +       return offset >= -0x8000 && offset <= 0x7FFF && !(offset & 0x3);
> +}

Why not inline this one?

> +
>  /*
>   * Helper to check if a given instruction is a conditional branch
>   * Derived from the conditional checks in analyse_instr()
> @@ -280,7 +285,7 @@ int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
>                 offset = offset - (unsigned long)addr;
>
>         /* Check we can represent the target in the instruction format */
> -       if (offset < -0x8000 || offset > 0x7FFF || offset & 0x3)
> +       if (!is_offset_in_cond_branch_range(offset))
>                 return 1;
>
>         /* Mask out the flags and target, so they don't step on each other. */
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 99fad093f43ec1..935ea95b66359e 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -78,11 +78,6 @@
>  #define PPC_FUNC_ADDR(d,i) do { PPC_LI32(d, i); } while(0)
>  #endif
>
> -static inline bool is_nearbranch(int offset)
> -{
> -       return (offset < 32768) && (offset >= -32768);
> -}
> -
>  /*
>   * The fly in the ointment of code size changing from pass to pass is
>   * avoided by padding the short branch case with a NOP.         If code size differs
> @@ -91,7 +86,7 @@ static inline bool is_nearbranch(int offset)
>   * state.
>   */
>  #define PPC_BCC(cond, dest)    do {                                          \
> -               if (is_nearbranch((dest) - (ctx->idx * 4))) {                 \
> +               if (is_offset_in_cond_branch_range((long)(dest) - (ctx->idx * 4))) {    \
>                         PPC_BCC_SHORT(cond, dest);                            \
>                         EMIT(PPC_RAW_NOP());                                  \
>                 } else {                                                      \
> --
> 2.33.0
>


* Re: [PATCH 2/9] powerpc/bpf: Validate branch ranges
  2021-10-01 21:14 ` [PATCH 2/9] powerpc/bpf: Validate branch ranges Naveen N. Rao
@ 2021-10-01 21:45   ` Song Liu
  2021-10-02 17:29   ` Johan Almbladh
  2021-10-03  7:54   ` Christophe Leroy
  2 siblings, 0 replies; 40+ messages in thread
From: Song Liu @ 2021-10-01 21:45 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh, bpf,
	linuxppc-dev

On Fri, Oct 1, 2021 at 2:16 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> Add checks to ensure that we never emit branch instructions with
> truncated branch offsets.
>
> Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
>  arch/powerpc/net/bpf_jit.h        | 26 ++++++++++++++++++++------
>  arch/powerpc/net/bpf_jit_comp.c   |  6 +++++-
>  arch/powerpc/net/bpf_jit_comp32.c |  8 ++++++--
>  arch/powerpc/net/bpf_jit_comp64.c |  8 ++++++--
>  4 files changed, 37 insertions(+), 11 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 935ea95b66359e..7e9b978b768ed9 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -24,16 +24,30 @@
>  #define EMIT(instr)            PLANT_INSTR(image, ctx->idx, instr)
>
>  /* Long jump; (unconditional 'branch') */
> -#define PPC_JMP(dest)          EMIT(PPC_INST_BRANCH |                        \
> -                                    (((dest) - (ctx->idx * 4)) & 0x03fffffc))
> +#define PPC_JMP(dest)                                                        \
> +       do {                                                                  \
> +               long offset = (long)(dest) - (ctx->idx * 4);                  \
> +               if (!is_offset_in_branch_range(offset)) {                     \
> +                       pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);                       \
> +                       return -ERANGE;                                       \
> +               }                                                             \
> +               EMIT(PPC_INST_BRANCH | (offset & 0x03fffffc));                \
> +       } while (0)
> +
>  /* blr; (unconditional 'branch' with link) to absolute address */
>  #define PPC_BL_ABS(dest)       EMIT(PPC_INST_BL |                            \
>                                      (((dest) - (unsigned long)(image + ctx->idx)) & 0x03fffffc))
>  /* "cond" here covers BO:BI fields. */
> -#define PPC_BCC_SHORT(cond, dest)      EMIT(PPC_INST_BRANCH_COND |           \
> -                                            (((cond) & 0x3ff) << 16) |       \
> -                                            (((dest) - (ctx->idx * 4)) &     \
> -                                             0xfffc))
> +#define PPC_BCC_SHORT(cond, dest)                                            \
> +       do {                                                                  \
> +               long offset = (long)(dest) - (ctx->idx * 4);                  \
> +               if (!is_offset_in_cond_branch_range(offset)) {                \
> +                       pr_err_ratelimited("Conditional branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);           \
> +                       return -ERANGE;                                       \
> +               }                                                             \
> +               EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc));                                      \
> +       } while (0)
> +
>  /* Sign-extended 32-bit immediate load */
>  #define PPC_LI32(d, i)         do {                                          \
>                 if ((int)(uintptr_t)(i) >= -32768 &&                          \
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 53aefee3fe70be..fcbf7a917c566e 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -210,7 +210,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>                 /* Now build the prologue, body code & epilogue for real. */
>                 cgctx.idx = 0;
>                 bpf_jit_build_prologue(code_base, &cgctx);
> -               bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass);
> +               if (bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass)) {
> +                       bpf_jit_binary_free(bpf_hdr);
> +                       fp = org_fp;
> +                       goto out_addrs;
> +               }
>                 bpf_jit_build_epilogue(code_base, &cgctx);
>
>                 if (bpf_jit_enable > 1)
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index beb12cbc8c2994..a74d52204f8da2 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -200,7 +200,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
>         }
>  }
>
> -static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
> +static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
>  {
>         /*
>          * By now, the eBPF program has already setup parameters in r3-r6
> @@ -261,7 +261,9 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
>         bpf_jit_emit_common_epilogue(image, ctx);
>
>         EMIT(PPC_RAW_BCTR());
> +
>         /* out: */
> +       return 0;
>  }
>
>  /* Assemble the body code between the prologue & epilogue */
> @@ -1090,7 +1092,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                  */
>                 case BPF_JMP | BPF_TAIL_CALL:
>                         ctx->seen |= SEEN_TAILCALL;
> -                       bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +                       ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +                       if (ret < 0)
> +                               return ret;
>                         break;
>
>                 default:
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index b87a63dba9c8fb..f06c62089b1457 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -206,7 +206,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
>         EMIT(PPC_RAW_BCTRL());
>  }
>
> -static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
> +static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
>  {
>         /*
>          * By now, the eBPF program has already setup parameters in r3, r4 and r5
> @@ -267,7 +267,9 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
>         bpf_jit_emit_common_epilogue(image, ctx);
>
>         EMIT(PPC_RAW_BCTR());
> +
>         /* out: */
> +       return 0;
>  }
>
>  /* Assemble the body code between the prologue & epilogue */
> @@ -993,7 +995,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                  */
>                 case BPF_JMP | BPF_TAIL_CALL:
>                         ctx->seen |= SEEN_TAILCALL;
> -                       bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +                       ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +                       if (ret < 0)
> +                               return ret;
>                         break;
>
>                 default:
> --
> 2.33.0
>


* Re: [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK
  2021-10-01 21:14 ` [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK Naveen N. Rao
@ 2021-10-01 21:47   ` Song Liu
  2021-10-02 17:30   ` Johan Almbladh
  2021-10-03  7:55   ` Christophe Leroy
  2 siblings, 0 replies; 40+ messages in thread
From: Song Liu @ 2021-10-01 21:47 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh, bpf,
	linuxppc-dev

On Fri, Oct 1, 2021 at 2:16 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> From: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
>
> SEEN_STACK is unused on PowerPC. Remove it. Also, have
> SEEN_TAILCALL use 0x40000000.
>
> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
>  arch/powerpc/net/bpf_jit.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 7e9b978b768ed9..89bd744c2bffd4 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -125,8 +125,7 @@
>  #define COND_LE                (CR0_GT | COND_CMP_FALSE)
>
>  #define SEEN_FUNC      0x20000000 /* might call external helpers */
> -#define SEEN_STACK     0x40000000 /* uses BPF stack */
> -#define SEEN_TAILCALL  0x80000000 /* uses tail calls */
> +#define SEEN_TAILCALL  0x40000000 /* uses tail calls */
>
>  #define SEEN_VREG_MASK 0x1ff80000 /* Volatile registers r3-r12 */
>  #define SEEN_NVREG_MASK        0x0003ffff /* Non volatile registers r14-r31 */
> --
> 2.33.0
>


* Re: [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT
  2021-10-01 21:14 ` [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT Naveen N. Rao
@ 2021-10-01 21:53   ` Song Liu
  2021-10-02 17:31   ` Johan Almbladh
  2021-10-03  7:59   ` Christophe Leroy
  2 siblings, 0 replies; 40+ messages in thread
From: Song Liu @ 2021-10-01 21:53 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh, bpf,
	linuxppc-dev

On Fri, Oct 1, 2021 at 2:17 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> In some scenarios, it is possible that the program epilogue is outside
> the branch range for a BPF_EXIT instruction. Instead of rejecting such
> programs, emit an indirect branch. We track the size of the bpf program
> emitted after the initial run and do a second pass, since BPF_EXIT can
> end up emitting a different number of instructions depending on the
> program size.
>
> Suggested-by: Jordan Niethe <jniethe5@gmail.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
>  arch/powerpc/net/bpf_jit.h        |  3 +++
>  arch/powerpc/net/bpf_jit_comp.c   | 22 +++++++++++++++++++++-
>  arch/powerpc/net/bpf_jit_comp32.c |  2 +-
>  arch/powerpc/net/bpf_jit_comp64.c |  2 +-
>  4 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 89bd744c2bffd4..4023de1698b9f5 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -126,6 +126,7 @@
>
>  #define SEEN_FUNC      0x20000000 /* might call external helpers */
>  #define SEEN_TAILCALL  0x40000000 /* uses tail calls */
> +#define SEEN_BIG_PROG  0x80000000 /* large prog, >32MB */
>
>  #define SEEN_VREG_MASK 0x1ff80000 /* Volatile registers r3-r12 */
>  #define SEEN_NVREG_MASK        0x0003ffff /* Non volatile registers r14-r31 */
> @@ -179,6 +180,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>  void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx);
>  void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx);
>  void bpf_jit_realloc_regs(struct codegen_context *ctx);
> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
> +                                       int tmp_reg, unsigned long exit_addr);
>
>  #endif
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index fcbf7a917c566e..3204872fbf2738 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -72,6 +72,21 @@ static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
>         return 0;
>  }
>
> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
> +                                       int tmp_reg, unsigned long exit_addr)
> +{
> +       if (!(ctx->seen & SEEN_BIG_PROG) && is_offset_in_branch_range(exit_addr)) {
> +               PPC_JMP(exit_addr);
> +       } else {
> +               ctx->seen |= SEEN_BIG_PROG;
> +               PPC_FUNC_ADDR(tmp_reg, (unsigned long)image + exit_addr);
> +               EMIT(PPC_RAW_MTCTR(tmp_reg));
> +               EMIT(PPC_RAW_BCTR());
> +       }
> +
> +       return 0;
> +}
> +
>  struct powerpc64_jit_data {
>         struct bpf_binary_header *header;
>         u32 *addrs;
> @@ -155,12 +170,17 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>                 goto out_addrs;
>         }
>
> +       if (!is_offset_in_branch_range((long)cgctx.idx * 4))
> +               cgctx.seen |= SEEN_BIG_PROG;
> +
>         /*
>          * If we have seen a tail call, we need a second pass.
>          * This is because bpf_jit_emit_common_epilogue() is called
>          * from bpf_jit_emit_tail_call() with a not yet stable ctx->seen.
> +        * We also need a second pass if we ended up with too large
> +        * a program so as to fix branches.
>          */
> -       if (cgctx.seen & SEEN_TAILCALL) {
> +       if (cgctx.seen & (SEEN_TAILCALL | SEEN_BIG_PROG)) {
>                 cgctx.idx = 0;
>                 if (bpf_jit_build_body(fp, 0, &cgctx, addrs, false)) {
>                         fp = org_fp;
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index a74d52204f8da2..d2a67574a23066 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -852,7 +852,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                          * we'll just fall through to the epilogue.
>                          */
>                         if (i != flen - 1)
> -                               PPC_JMP(exit_addr);
> +                               bpf_jit_emit_exit_insn(image, ctx, tmp_reg, exit_addr);
>                         /* else fall through to the epilogue */
>                         break;
>
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index f06c62089b1457..3351a866ef6207 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -761,7 +761,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                          * we'll just fall through to the epilogue.
>                          */
>                         if (i != flen - 1)
> -                               PPC_JMP(exit_addr);
> +                               bpf_jit_emit_exit_insn(image, ctx, b2p[TMP_REG_1], exit_addr);
>                         /* else fall through to the epilogue */
>                         break;
>
> --
> 2.33.0
>


* Re: [PATCH 5/9] powerpc/bpf: Fix BPF_MOD when imm == 1
  2021-10-01 21:14 ` [PATCH 5/9] powerpc/bpf: Fix BPF_MOD when imm == 1 Naveen N. Rao
@ 2021-10-01 21:55   ` Song Liu
  2021-10-02 17:32   ` Johan Almbladh
  1 sibling, 0 replies; 40+ messages in thread
From: Song Liu @ 2021-10-01 21:55 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh, bpf,
	linuxppc-dev

On Fri, Oct 1, 2021 at 2:16 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> Only ignore the operation when dividing by 1. For BPF_MOD with an
> immediate of 1, the result is always 0, so load 0 into the destination
> register instead.
>
> Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
>  arch/powerpc/net/bpf_jit_comp64.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 3351a866ef6207..ffb7a2877a8469 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -391,8 +391,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                 case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */
>                         if (imm == 0)
>                                 return -EINVAL;
> -                       else if (imm == 1)
> -                               goto bpf_alu32_trunc;
> +                       if (imm == 1) {
> +                               if (BPF_OP(code) == BPF_DIV) {
> +                                       goto bpf_alu32_trunc;
> +                               } else {
> +                                       EMIT(PPC_RAW_LI(dst_reg, 0));
> +                                       break;
> +                               }
> +                       }
>
>                         PPC_LI32(b2p[TMP_REG_1], imm);
>                         switch (BPF_CLASS(code)) {
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
  2021-10-01 21:14 ` [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000 Naveen N. Rao
@ 2021-10-01 22:01   ` Song Liu
  2021-10-02 17:33   ` Johan Almbladh
  2021-10-03  8:07   ` Christophe Leroy
  2 siblings, 0 replies; 40+ messages in thread
From: Song Liu @ 2021-10-01 22:01 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, Johan Almbladh, bpf,
	linuxppc-dev

On Fri, Oct 1, 2021 at 2:17 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> We aren't handling subtraction involving an immediate value of
> 0x80000000 properly. Fix the same.
>
> Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
>  arch/powerpc/net/bpf_jit_comp64.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index ffb7a2877a8469..4641a50e82d50d 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -333,15 +333,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                 case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
>                 case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
>                 case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
> -                       if (BPF_OP(code) == BPF_SUB)
> -                               imm = -imm;
> -                       if (imm) {
> -                               if (imm >= -32768 && imm < 32768)
> -                                       EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
> -                               else {
> -                                       PPC_LI32(b2p[TMP_REG_1], imm);
> +                       if (imm > -32768 && imm < 32768) {
> +                               EMIT(PPC_RAW_ADDI(dst_reg, dst_reg,
> +                                       BPF_OP(code) == BPF_SUB ? IMM_L(-imm) : IMM_L(imm)));
> +                       } else {
> +                               PPC_LI32(b2p[TMP_REG_1], imm);
> +                               if (BPF_OP(code) == BPF_SUB)
> +                                       EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
> +                               else
>                                         EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
> -                               }
>                         }
>                         goto bpf_alu32_trunc;
>                 case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 2/9] powerpc/bpf: Validate branch ranges
  2021-10-01 21:14 ` [PATCH 2/9] powerpc/bpf: Validate branch ranges Naveen N. Rao
  2021-10-01 21:45   ` Song Liu
@ 2021-10-02 17:29   ` Johan Almbladh
  2021-10-03  7:54   ` Christophe Leroy
  2 siblings, 0 replies; 40+ messages in thread
From: Johan Almbladh @ 2021-10-02 17:29 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, bpf, linuxppc-dev

On Fri, Oct 1, 2021 at 11:15 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> Add checks to ensure that we never emit branch instructions with
> truncated branch offsets.
>
> Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

> ---
>  arch/powerpc/net/bpf_jit.h        | 26 ++++++++++++++++++++------
>  arch/powerpc/net/bpf_jit_comp.c   |  6 +++++-
>  arch/powerpc/net/bpf_jit_comp32.c |  8 ++++++--
>  arch/powerpc/net/bpf_jit_comp64.c |  8 ++++++--
>  4 files changed, 37 insertions(+), 11 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 935ea95b66359e..7e9b978b768ed9 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -24,16 +24,30 @@
>  #define EMIT(instr)            PLANT_INSTR(image, ctx->idx, instr)
>
>  /* Long jump; (unconditional 'branch') */
> -#define PPC_JMP(dest)          EMIT(PPC_INST_BRANCH |                        \
> -                                    (((dest) - (ctx->idx * 4)) & 0x03fffffc))
> +#define PPC_JMP(dest)                                                        \
> +       do {                                                                  \
> +               long offset = (long)(dest) - (ctx->idx * 4);                  \
> +               if (!is_offset_in_branch_range(offset)) {                     \
> +                       pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);                       \
> +                       return -ERANGE;                                       \
> +               }                                                             \
> +               EMIT(PPC_INST_BRANCH | (offset & 0x03fffffc));                \
> +       } while (0)
> +
>  /* blr; (unconditional 'branch' with link) to absolute address */
>  #define PPC_BL_ABS(dest)       EMIT(PPC_INST_BL |                            \
>                                      (((dest) - (unsigned long)(image + ctx->idx)) & 0x03fffffc))
>  /* "cond" here covers BO:BI fields. */
> -#define PPC_BCC_SHORT(cond, dest)      EMIT(PPC_INST_BRANCH_COND |           \
> -                                            (((cond) & 0x3ff) << 16) |       \
> -                                            (((dest) - (ctx->idx * 4)) &     \
> -                                             0xfffc))
> +#define PPC_BCC_SHORT(cond, dest)                                            \
> +       do {                                                                  \
> +               long offset = (long)(dest) - (ctx->idx * 4);                  \
> +               if (!is_offset_in_cond_branch_range(offset)) {                \
> +                       pr_err_ratelimited("Conditional branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);           \
> +                       return -ERANGE;                                       \
> +               }                                                             \
> +               EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc));                                      \
> +       } while (0)
> +
>  /* Sign-extended 32-bit immediate load */
>  #define PPC_LI32(d, i)         do {                                          \
>                 if ((int)(uintptr_t)(i) >= -32768 &&                          \
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 53aefee3fe70be..fcbf7a917c566e 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -210,7 +210,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>                 /* Now build the prologue, body code & epilogue for real. */
>                 cgctx.idx = 0;
>                 bpf_jit_build_prologue(code_base, &cgctx);
> -               bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass);
> +               if (bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass)) {
> +                       bpf_jit_binary_free(bpf_hdr);
> +                       fp = org_fp;
> +                       goto out_addrs;
> +               }
>                 bpf_jit_build_epilogue(code_base, &cgctx);
>
>                 if (bpf_jit_enable > 1)
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index beb12cbc8c2994..a74d52204f8da2 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -200,7 +200,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
>         }
>  }
>
> -static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
> +static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
>  {
>         /*
>          * By now, the eBPF program has already setup parameters in r3-r6
> @@ -261,7 +261,9 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
>         bpf_jit_emit_common_epilogue(image, ctx);
>
>         EMIT(PPC_RAW_BCTR());
> +
>         /* out: */
> +       return 0;
>  }
>
>  /* Assemble the body code between the prologue & epilogue */
> @@ -1090,7 +1092,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                  */
>                 case BPF_JMP | BPF_TAIL_CALL:
>                         ctx->seen |= SEEN_TAILCALL;
> -                       bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +                       ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +                       if (ret < 0)
> +                               return ret;
>                         break;
>
>                 default:
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index b87a63dba9c8fb..f06c62089b1457 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -206,7 +206,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
>         EMIT(PPC_RAW_BCTRL());
>  }
>
> -static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
> +static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
>  {
>         /*
>          * By now, the eBPF program has already setup parameters in r3, r4 and r5
> @@ -267,7 +267,9 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
>         bpf_jit_emit_common_epilogue(image, ctx);
>
>         EMIT(PPC_RAW_BCTR());
> +
>         /* out: */
> +       return 0;
>  }
>
>  /* Assemble the body code between the prologue & epilogue */
> @@ -993,7 +995,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                  */
>                 case BPF_JMP | BPF_TAIL_CALL:
>                         ctx->seen |= SEEN_TAILCALL;
> -                       bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +                       ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +                       if (ret < 0)
> +                               return ret;
>                         break;
>
>                 default:
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK
  2021-10-01 21:14 ` [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK Naveen N. Rao
  2021-10-01 21:47   ` Song Liu
@ 2021-10-02 17:30   ` Johan Almbladh
  2021-10-03  7:55   ` Christophe Leroy
  2 siblings, 0 replies; 40+ messages in thread
From: Johan Almbladh @ 2021-10-02 17:30 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, bpf, linuxppc-dev

On Fri, Oct 1, 2021 at 11:15 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> From: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
>
> SEEN_STACK is unused on PowerPC. Remove it. Also, have
> SEEN_TAILCALL use 0x40000000.
>
> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

> ---
>  arch/powerpc/net/bpf_jit.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 7e9b978b768ed9..89bd744c2bffd4 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -125,8 +125,7 @@
>  #define COND_LE                (CR0_GT | COND_CMP_FALSE)
>
>  #define SEEN_FUNC      0x20000000 /* might call external helpers */
> -#define SEEN_STACK     0x40000000 /* uses BPF stack */
> -#define SEEN_TAILCALL  0x80000000 /* uses tail calls */
> +#define SEEN_TAILCALL  0x40000000 /* uses tail calls */
>
>  #define SEEN_VREG_MASK 0x1ff80000 /* Volatile registers r3-r12 */
>  #define SEEN_NVREG_MASK        0x0003ffff /* Non volatile registers r14-r31 */
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT
  2021-10-01 21:14 ` [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT Naveen N. Rao
  2021-10-01 21:53   ` Song Liu
@ 2021-10-02 17:31   ` Johan Almbladh
  2021-10-03  7:59   ` Christophe Leroy
  2 siblings, 0 replies; 40+ messages in thread
From: Johan Almbladh @ 2021-10-02 17:31 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, bpf, linuxppc-dev

On Fri, Oct 1, 2021 at 11:15 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> In some scenarios, it is possible that the program epilogue is outside
> the branch range for a BPF_EXIT instruction. Instead of rejecting such
> programs, emit an indirect branch. We track the size of the bpf program
> emitted after the initial run and do a second pass since BPF_EXIT can
> end up emitting different number of instructions depending on the
> program size.
>
> Suggested-by: Jordan Niethe <jniethe5@gmail.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

> ---
>  arch/powerpc/net/bpf_jit.h        |  3 +++
>  arch/powerpc/net/bpf_jit_comp.c   | 22 +++++++++++++++++++++-
>  arch/powerpc/net/bpf_jit_comp32.c |  2 +-
>  arch/powerpc/net/bpf_jit_comp64.c |  2 +-
>  4 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 89bd744c2bffd4..4023de1698b9f5 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -126,6 +126,7 @@
>
>  #define SEEN_FUNC      0x20000000 /* might call external helpers */
>  #define SEEN_TAILCALL  0x40000000 /* uses tail calls */
> +#define SEEN_BIG_PROG  0x80000000 /* large prog, >32MB */
>
>  #define SEEN_VREG_MASK 0x1ff80000 /* Volatile registers r3-r12 */
>  #define SEEN_NVREG_MASK        0x0003ffff /* Non volatile registers r14-r31 */
> @@ -179,6 +180,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>  void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx);
>  void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx);
>  void bpf_jit_realloc_regs(struct codegen_context *ctx);
> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
> +                                       int tmp_reg, unsigned long exit_addr);
>
>  #endif
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index fcbf7a917c566e..3204872fbf2738 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -72,6 +72,21 @@ static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
>         return 0;
>  }
>
> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
> +                                       int tmp_reg, unsigned long exit_addr)
> +{
> +       if (!(ctx->seen & SEEN_BIG_PROG) && is_offset_in_branch_range(exit_addr)) {
> +               PPC_JMP(exit_addr);
> +       } else {
> +               ctx->seen |= SEEN_BIG_PROG;
> +               PPC_FUNC_ADDR(tmp_reg, (unsigned long)image + exit_addr);
> +               EMIT(PPC_RAW_MTCTR(tmp_reg));
> +               EMIT(PPC_RAW_BCTR());
> +       }
> +
> +       return 0;
> +}
> +
>  struct powerpc64_jit_data {
>         struct bpf_binary_header *header;
>         u32 *addrs;
> @@ -155,12 +170,17 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>                 goto out_addrs;
>         }
>
> +       if (!is_offset_in_branch_range((long)cgctx.idx * 4))
> +               cgctx.seen |= SEEN_BIG_PROG;
> +
>         /*
>          * If we have seen a tail call, we need a second pass.
>          * This is because bpf_jit_emit_common_epilogue() is called
>          * from bpf_jit_emit_tail_call() with a not yet stable ctx->seen.
> +        * We also need a second pass if we ended up with too large
> +        * a program so as to fix branches.
>          */
> -       if (cgctx.seen & SEEN_TAILCALL) {
> +       if (cgctx.seen & (SEEN_TAILCALL | SEEN_BIG_PROG)) {
>                 cgctx.idx = 0;
>                 if (bpf_jit_build_body(fp, 0, &cgctx, addrs, false)) {
>                         fp = org_fp;
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index a74d52204f8da2..d2a67574a23066 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -852,7 +852,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                          * we'll just fall through to the epilogue.
>                          */
>                         if (i != flen - 1)
> -                               PPC_JMP(exit_addr);
> +                               bpf_jit_emit_exit_insn(image, ctx, tmp_reg, exit_addr);
>                         /* else fall through to the epilogue */
>                         break;
>
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index f06c62089b1457..3351a866ef6207 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -761,7 +761,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                          * we'll just fall through to the epilogue.
>                          */
>                         if (i != flen - 1)
> -                               PPC_JMP(exit_addr);
> +                               bpf_jit_emit_exit_insn(image, ctx, b2p[TMP_REG_1], exit_addr);
>                         /* else fall through to the epilogue */
>                         break;
>
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 5/9] powerpc/bpf: Fix BPF_MOD when imm == 1
  2021-10-01 21:14 ` [PATCH 5/9] powerpc/bpf: Fix BPF_MOD when imm == 1 Naveen N. Rao
  2021-10-01 21:55   ` Song Liu
@ 2021-10-02 17:32   ` Johan Almbladh
  1 sibling, 0 replies; 40+ messages in thread
From: Johan Almbladh @ 2021-10-02 17:32 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, bpf, linuxppc-dev

On Fri, Oct 1, 2021 at 11:15 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> Only ignore the operation if dividing by 1.
>
> Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

> ---
>  arch/powerpc/net/bpf_jit_comp64.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 3351a866ef6207..ffb7a2877a8469 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -391,8 +391,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                 case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */
>                         if (imm == 0)
>                                 return -EINVAL;
> -                       else if (imm == 1)
> -                               goto bpf_alu32_trunc;
> +                       if (imm == 1) {
> +                               if (BPF_OP(code) == BPF_DIV) {
> +                                       goto bpf_alu32_trunc;
> +                               } else {
> +                                       EMIT(PPC_RAW_LI(dst_reg, 0));
> +                                       break;
> +                               }
> +                       }
>
>                         PPC_LI32(b2p[TMP_REG_1], imm);
>                         switch (BPF_CLASS(code)) {
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
  2021-10-01 21:14 ` [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000 Naveen N. Rao
  2021-10-01 22:01   ` Song Liu
@ 2021-10-02 17:33   ` Johan Almbladh
  2021-10-03  8:07   ` Christophe Leroy
  2 siblings, 0 replies; 40+ messages in thread
From: Johan Almbladh @ 2021-10-02 17:33 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, bpf, linuxppc-dev

On Fri, Oct 1, 2021 at 11:15 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> We aren't handling subtraction involving an immediate value of
> 0x80000000 properly. Fix the same.
>
> Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

> ---
>  arch/powerpc/net/bpf_jit_comp64.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index ffb7a2877a8469..4641a50e82d50d 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -333,15 +333,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                 case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
>                 case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
>                 case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
> -                       if (BPF_OP(code) == BPF_SUB)
> -                               imm = -imm;
> -                       if (imm) {
> -                               if (imm >= -32768 && imm < 32768)
> -                                       EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
> -                               else {
> -                                       PPC_LI32(b2p[TMP_REG_1], imm);
> +                       if (imm > -32768 && imm < 32768) {
> +                               EMIT(PPC_RAW_ADDI(dst_reg, dst_reg,
> +                                       BPF_OP(code) == BPF_SUB ? IMM_L(-imm) : IMM_L(imm)));
> +                       } else {
> +                               PPC_LI32(b2p[TMP_REG_1], imm);
> +                               if (BPF_OP(code) == BPF_SUB)
> +                                       EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
> +                               else
>                                         EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
> -                               }
>                         }
>                         goto bpf_alu32_trunc;
>                 case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 7/9] powerpc/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06
  2021-10-01 21:14 ` [PATCH 7/9] powerpc/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06 Naveen N. Rao
@ 2021-10-02 17:35   ` Johan Almbladh
  0 siblings, 0 replies; 40+ messages in thread
From: Johan Almbladh @ 2021-10-02 17:35 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, bpf, linuxppc-dev

On Fri, Oct 1, 2021 at 11:15 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> Johan reported the below crash with test_bpf on ppc64 e5500:
>
>   test_bpf: #296 ALU_END_FROM_LE 64: 0x0123456789abcdef -> 0x67452301 jited:1
>   Oops: Exception in kernel mode, sig: 4 [#1]
>   BE PAGE_SIZE=4K SMP NR_CPUS=24 QEMU e500
>   Modules linked in: test_bpf(+)
>   CPU: 0 PID: 76 Comm: insmod Not tainted 5.14.0-03771-g98c2059e008a-dirty #1
>   NIP:  8000000000061c3c LR: 80000000006dea64 CTR: 8000000000061c18
>   REGS: c0000000032d3420 TRAP: 0700   Not tainted (5.14.0-03771-g98c2059e008a-dirty)
>   MSR:  0000000080089000 <EE,ME>  CR: 88002822  XER: 20000000 IRQMASK: 0
>   <...>
>   NIP [8000000000061c3c] 0x8000000000061c3c
>   LR [80000000006dea64] .__run_one+0x104/0x17c [test_bpf]
>   Call Trace:
>    .__run_one+0x60/0x17c [test_bpf] (unreliable)
>    .test_bpf_init+0x6a8/0xdc8 [test_bpf]
>    .do_one_initcall+0x6c/0x28c
>    .do_init_module+0x68/0x28c
>    .load_module+0x2460/0x2abc
>    .__do_sys_init_module+0x120/0x18c
>    .system_call_exception+0x110/0x1b8
>    system_call_common+0xf0/0x210
>   --- interrupt: c00 at 0x101d0acc
>   <...>
>   ---[ end trace 47b2bf19090bb3d0 ]---
>
>   Illegal instruction
>
> The illegal instruction turned out to be 'ldbrx' emitted for
> BPF_FROM_[L|B]E, which was only introduced in ISA v2.06. Guard use of
> the same and implement an alternative approach for older processors.
>
> Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
> Reported-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>

> ---
>  arch/powerpc/include/asm/ppc-opcode.h |  1 +
>  arch/powerpc/net/bpf_jit_comp64.c     | 22 +++++++++++++---------
>  2 files changed, 14 insertions(+), 9 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
> index baea657bc8687e..bca31a61e57f88 100644
> --- a/arch/powerpc/include/asm/ppc-opcode.h
> +++ b/arch/powerpc/include/asm/ppc-opcode.h
> @@ -498,6 +498,7 @@
>  #define PPC_RAW_LDX(r, base, b)                (0x7c00002a | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
>  #define PPC_RAW_LHZ(r, base, i)                (0xa0000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i))
>  #define PPC_RAW_LHBRX(r, base, b)      (0x7c00062c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
> +#define PPC_RAW_LWBRX(r, base, b)      (0x7c00042c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
>  #define PPC_RAW_LDBRX(r, base, b)      (0x7c000428 | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b))
>  #define PPC_RAW_STWCX(s, a, b)         (0x7c00012d | ___PPC_RS(s) | ___PPC_RA(a) | ___PPC_RB(b))
>  #define PPC_RAW_CMPWI(a, i)            (0x2c000000 | ___PPC_RA(a) | IMM_L(i))
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 4641a50e82d50d..097d1ecfb9f562 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -601,17 +601,21 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>                                 EMIT(PPC_RAW_MR(dst_reg, b2p[TMP_REG_1]));
>                                 break;
>                         case 64:
> -                               /*
> -                                * Way easier and faster(?) to store the value
> -                                * into stack and then use ldbrx
> -                                *
> -                                * ctx->seen will be reliable in pass2, but
> -                                * the instructions generated will remain the
> -                                * same across all passes
> -                                */
> +                               /* Store the value to stack and then use byte-reverse loads */
>                                 PPC_BPF_STL(dst_reg, 1, bpf_jit_stack_local(ctx));
>                                 EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], 1, bpf_jit_stack_local(ctx)));
> -                               EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
> +                               if (cpu_has_feature(CPU_FTR_ARCH_206)) {
> +                                       EMIT(PPC_RAW_LDBRX(dst_reg, 0, b2p[TMP_REG_1]));
> +                               } else {
> +                                       EMIT(PPC_RAW_LWBRX(dst_reg, 0, b2p[TMP_REG_1]));
> +                                       if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
> +                                               EMIT(PPC_RAW_SLDI(dst_reg, dst_reg, 32));
> +                                       EMIT(PPC_RAW_LI(b2p[TMP_REG_2], 4));
> +                                       EMIT(PPC_RAW_LWBRX(b2p[TMP_REG_2], b2p[TMP_REG_2], b2p[TMP_REG_1]));
> +                                       if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
> +                                               EMIT(PPC_RAW_SLDI(b2p[TMP_REG_2], b2p[TMP_REG_2], 32));
> +                                       EMIT(PPC_RAW_OR(dst_reg, dst_reg, b2p[TMP_REG_2]));
> +                               }
>                                 break;
>                         }
>                         break;
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 0/9] powerpc/bpf: Various fixes
  2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
                   ` (8 preceding siblings ...)
  2021-10-01 21:14 ` [PATCH 9/9] powerpc/bpf: Emit stf barrier instruction sequences for BPF_NOSPEC Naveen N. Rao
@ 2021-10-02 17:41 ` Johan Almbladh
  2021-10-04 18:19   ` Naveen N. Rao
  9 siblings, 1 reply; 40+ messages in thread
From: Johan Almbladh @ 2021-10-02 17:41 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Nicholas Piggin, Daniel Borkmann,
	Alexei Starovoitov, Christophe Leroy, bpf, linuxppc-dev

On Fri, Oct 1, 2021 at 11:15 PM Naveen N. Rao
<naveen.n.rao@linux.vnet.ibm.com> wrote:
>
> Various fixes to the eBPF JIT for powerpc, thanks to some new tests
> added by Johan. This series fixes all failures in test_bpf on powerpc64.
> There are still some failures on powerpc32 to be looked into.

Great work! I have tested it on powerpc64 in QEMU, which is the same
setup that previously triggered an illegal instruction, and all tests
pass now. On powerpc32 there are still some issues left as you say.

Thanks!
Johan


>
> - Naveen
>
>
> Naveen N. Rao (8):
>   powerpc/lib: Add helper to check if offset is within conditional
>     branch range
>   powerpc/bpf: Validate branch ranges
>   powerpc/bpf: Handle large branch ranges with BPF_EXIT
>   powerpc/bpf: Fix BPF_MOD when imm == 1
>   powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
>   powerpc/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06
>   powerpc/security: Add a helper to query stf_barrier type
>   powerpc/bpf: Emit stf barrier instruction sequences for BPF_NOSPEC
>
> Ravi Bangoria (1):
>   powerpc/bpf: Remove unused SEEN_STACK
>
>  arch/powerpc/include/asm/code-patching.h     |   1 +
>  arch/powerpc/include/asm/ppc-opcode.h        |   1 +
>  arch/powerpc/include/asm/security_features.h |   5 +
>  arch/powerpc/kernel/security.c               |   5 +
>  arch/powerpc/lib/code-patching.c             |   7 +-
>  arch/powerpc/net/bpf_jit.h                   |  39 ++++---
>  arch/powerpc/net/bpf_jit64.h                 |   8 +-
>  arch/powerpc/net/bpf_jit_comp.c              |  28 ++++-
>  arch/powerpc/net/bpf_jit_comp32.c            |  10 +-
>  arch/powerpc/net/bpf_jit_comp64.c            | 113 ++++++++++++++-----
>  10 files changed, 167 insertions(+), 50 deletions(-)
>
>
> base-commit: 044c2d99d9f43c6d6fde8bed00672517dd9a5a57
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range
  2021-10-01 21:14 ` [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range Naveen N. Rao
  2021-10-01 21:37   ` Song Liu
@ 2021-10-03  7:50   ` Christophe Leroy
  2021-10-04 18:03     ` Naveen N. Rao
  1 sibling, 1 reply; 40+ messages in thread
From: Christophe Leroy @ 2021-10-03  7:50 UTC (permalink / raw)
  To: Naveen N. Rao, Michael Ellerman, Nicholas Piggin,
	Daniel Borkmann, Alexei Starovoitov, Johan Almbladh
  Cc: bpf, linuxppc-dev



On 01/10/2021 at 23:14, Naveen N. Rao wrote:
> Add a helper to check if a given offset is within the branch range for a
> powerpc conditional branch instruction, and update some sites to use the
> new helper.
> 
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>   arch/powerpc/include/asm/code-patching.h | 1 +
>   arch/powerpc/lib/code-patching.c         | 7 ++++++-
>   arch/powerpc/net/bpf_jit.h               | 7 +------
>   3 files changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
> index a95f63788c6b14..4ba834599c4d4c 100644
> --- a/arch/powerpc/include/asm/code-patching.h
> +++ b/arch/powerpc/include/asm/code-patching.h
> @@ -23,6 +23,7 @@
>   #define BRANCH_ABSOLUTE	0x2
>   
>   bool is_offset_in_branch_range(long offset);
> +bool is_offset_in_cond_branch_range(long offset);
>   int create_branch(struct ppc_inst *instr, const u32 *addr,
>   		  unsigned long target, int flags);
>   int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index f9a3019e37b43c..e2342b9a1ab9c9 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -228,6 +228,11 @@ bool is_offset_in_branch_range(long offset)
>   	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
>   }
>   
> +bool is_offset_in_cond_branch_range(long offset)
> +{
> +	return offset >= -0x8000 && offset <= 0x7FFF && !(offset & 0x3);
> +}

Would be better without capital letters in the hex constants, i.e. 0x7fff 
instead of 0x7FFF

> +
>   /*
>    * Helper to check if a given instruction is a conditional branch
>    * Derived from the conditional checks in analyse_instr()
> @@ -280,7 +285,7 @@ int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
>   		offset = offset - (unsigned long)addr;
>   
>   	/* Check we can represent the target in the instruction format */
> -	if (offset < -0x8000 || offset > 0x7FFF || offset & 0x3)
> +	if (!is_offset_in_cond_branch_range(offset))
>   		return 1;
>   
>   	/* Mask out the flags and target, so they don't step on each other. */
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 99fad093f43ec1..935ea95b66359e 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -78,11 +78,6 @@
>   #define PPC_FUNC_ADDR(d,i) do { PPC_LI32(d, i); } while(0)
>   #endif
>   
> -static inline bool is_nearbranch(int offset)
> -{
> -	return (offset < 32768) && (offset >= -32768);
> -}
> -
>   /*
>    * The fly in the ointment of code size changing from pass to pass is
>    * avoided by padding the short branch case with a NOP.	 If code size differs
> @@ -91,7 +86,7 @@ static inline bool is_nearbranch(int offset)
>    * state.
>    */
>   #define PPC_BCC(cond, dest)	do {					      \
> -		if (is_nearbranch((dest) - (ctx->idx * 4))) {		      \
> +		if (is_offset_in_cond_branch_range((long)(dest) - (ctx->idx * 4))) {	\
>   			PPC_BCC_SHORT(cond, dest);			      \
>   			EMIT(PPC_RAW_NOP());				      \
>   		} else {						      \
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 2/9] powerpc/bpf: Validate branch ranges
  2021-10-01 21:14 ` [PATCH 2/9] powerpc/bpf: Validate branch ranges Naveen N. Rao
  2021-10-01 21:45   ` Song Liu
  2021-10-02 17:29   ` Johan Almbladh
@ 2021-10-03  7:54   ` Christophe Leroy
  2021-10-04 18:11     ` Naveen N. Rao
  2 siblings, 1 reply; 40+ messages in thread
From: Christophe Leroy @ 2021-10-03  7:54 UTC (permalink / raw)
  To: Naveen N. Rao, Michael Ellerman, Nicholas Piggin,
	Daniel Borkmann, Alexei Starovoitov, Johan Almbladh
  Cc: bpf, linuxppc-dev



On 01/10/2021 at 23:14, Naveen N. Rao wrote:
> Add checks to ensure that we never emit branch instructions with
> truncated branch offsets.
> 
> Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>   arch/powerpc/net/bpf_jit.h        | 26 ++++++++++++++++++++------
>   arch/powerpc/net/bpf_jit_comp.c   |  6 +++++-
>   arch/powerpc/net/bpf_jit_comp32.c |  8 ++++++--
>   arch/powerpc/net/bpf_jit_comp64.c |  8 ++++++--
>   4 files changed, 37 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 935ea95b66359e..7e9b978b768ed9 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -24,16 +24,30 @@
>   #define EMIT(instr)		PLANT_INSTR(image, ctx->idx, instr)
>   
>   /* Long jump; (unconditional 'branch') */
> -#define PPC_JMP(dest)		EMIT(PPC_INST_BRANCH |			      \
> -				     (((dest) - (ctx->idx * 4)) & 0x03fffffc))
> +#define PPC_JMP(dest)							      \
> +	do {								      \
> +		long offset = (long)(dest) - (ctx->idx * 4);		      \
> +		if (!is_offset_in_branch_range(offset)) {		      \
> +			pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);			\

Does it really deserve a KERN_ERR?
Isn't that something that can be triggered by a userland request?

> +			return -ERANGE;					      \
> +		}							      \
> +		EMIT(PPC_INST_BRANCH | (offset & 0x03fffffc));		      \
> +	} while (0)
> +
>   /* blr; (unconditional 'branch' with link) to absolute address */
>   #define PPC_BL_ABS(dest)	EMIT(PPC_INST_BL |			      \
>   				     (((dest) - (unsigned long)(image + ctx->idx)) & 0x03fffffc))
>   /* "cond" here covers BO:BI fields. */
> -#define PPC_BCC_SHORT(cond, dest)	EMIT(PPC_INST_BRANCH_COND |	      \
> -					     (((cond) & 0x3ff) << 16) |	      \
> -					     (((dest) - (ctx->idx * 4)) &     \
> -					      0xfffc))
> +#define PPC_BCC_SHORT(cond, dest)					      \
> +	do {								      \
> +		long offset = (long)(dest) - (ctx->idx * 4);		      \
> +		if (!is_offset_in_cond_branch_range(offset)) {		      \
> +			pr_err_ratelimited("Conditional branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);		\

Same

> +			return -ERANGE;					      \
> +		}							      \
> +		EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc));					\
> +	} while (0)
> +
>   /* Sign-extended 32-bit immediate load */
>   #define PPC_LI32(d, i)		do {					      \
>   		if ((int)(uintptr_t)(i) >= -32768 &&			      \
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 53aefee3fe70be..fcbf7a917c566e 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -210,7 +210,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>   		/* Now build the prologue, body code & epilogue for real. */
>   		cgctx.idx = 0;
>   		bpf_jit_build_prologue(code_base, &cgctx);
> -		bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass);
> +		if (bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass)) {
> +			bpf_jit_binary_free(bpf_hdr);
> +			fp = org_fp;
> +			goto out_addrs;
> +		}
>   		bpf_jit_build_epilogue(code_base, &cgctx);
>   
>   		if (bpf_jit_enable > 1)
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index beb12cbc8c2994..a74d52204f8da2 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -200,7 +200,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
>   	}
>   }
>   
> -static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
> +static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
>   {
>   	/*
>   	 * By now, the eBPF program has already setup parameters in r3-r6
> @@ -261,7 +261,9 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
>   	bpf_jit_emit_common_epilogue(image, ctx);
>   
>   	EMIT(PPC_RAW_BCTR());
> +
>   	/* out: */
> +	return 0;
>   }
>   
>   /* Assemble the body code between the prologue & epilogue */
> @@ -1090,7 +1092,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>   		 */
>   		case BPF_JMP | BPF_TAIL_CALL:
>   			ctx->seen |= SEEN_TAILCALL;
> -			bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +			ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +			if (ret < 0)
> +				return ret;
>   			break;
>   
>   		default:
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index b87a63dba9c8fb..f06c62089b1457 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -206,7 +206,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
>   	EMIT(PPC_RAW_BCTRL());
>   }
>   
> -static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
> +static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
>   {
>   	/*
>   	 * By now, the eBPF program has already setup parameters in r3, r4 and r5
> @@ -267,7 +267,9 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
>   	bpf_jit_emit_common_epilogue(image, ctx);
>   
>   	EMIT(PPC_RAW_BCTR());
> +
>   	/* out: */
> +	return 0;
>   }
>   
>   /* Assemble the body code between the prologue & epilogue */
> @@ -993,7 +995,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>   		 */
>   		case BPF_JMP | BPF_TAIL_CALL:
>   			ctx->seen |= SEEN_TAILCALL;
> -			bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +			ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
> +			if (ret < 0)
> +				return ret;
>   			break;
>   
>   		default:
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK
  2021-10-01 21:14 ` [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK Naveen N. Rao
  2021-10-01 21:47   ` Song Liu
  2021-10-02 17:30   ` Johan Almbladh
@ 2021-10-03  7:55   ` Christophe Leroy
  2021-10-04 18:11     ` Naveen N. Rao
  2 siblings, 1 reply; 40+ messages in thread
From: Christophe Leroy @ 2021-10-03  7:55 UTC (permalink / raw)
  To: Naveen N. Rao, Michael Ellerman, Nicholas Piggin,
	Daniel Borkmann, Alexei Starovoitov, Johan Almbladh
  Cc: bpf, linuxppc-dev



On 01/10/2021 at 23:14, Naveen N. Rao wrote:
> From: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
> 
> SEEN_STACK is unused on PowerPC. Remove it. Also, have
> SEEN_TAILCALL use 0x40000000.

Why change SEEN_TAILCALL? Would it be a problem to leave it as is?

> 
> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> ---
>   arch/powerpc/net/bpf_jit.h | 3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 7e9b978b768ed9..89bd744c2bffd4 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -125,8 +125,7 @@
>   #define COND_LE		(CR0_GT | COND_CMP_FALSE)
>   
>   #define SEEN_FUNC	0x20000000 /* might call external helpers */
> -#define SEEN_STACK	0x40000000 /* uses BPF stack */
> -#define SEEN_TAILCALL	0x80000000 /* uses tail calls */
> +#define SEEN_TAILCALL	0x40000000 /* uses tail calls */
>   
>   #define SEEN_VREG_MASK	0x1ff80000 /* Volatile registers r3-r12 */
>   #define SEEN_NVREG_MASK	0x0003ffff /* Non volatile registers r14-r31 */
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT
  2021-10-01 21:14 ` [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT Naveen N. Rao
  2021-10-01 21:53   ` Song Liu
  2021-10-02 17:31   ` Johan Almbladh
@ 2021-10-03  7:59   ` Christophe Leroy
  2021-10-04 18:24     ` Naveen N. Rao
  2 siblings, 1 reply; 40+ messages in thread
From: Christophe Leroy @ 2021-10-03  7:59 UTC (permalink / raw)
  To: Naveen N. Rao, Michael Ellerman, Nicholas Piggin,
	Daniel Borkmann, Alexei Starovoitov, Johan Almbladh
  Cc: bpf, linuxppc-dev



On 01/10/2021 at 23:14, Naveen N. Rao wrote:
> In some scenarios, it is possible that the program epilogue is outside
> the branch range for a BPF_EXIT instruction. Instead of rejecting such
> programs, emit an indirect branch. We track the size of the bpf program
> emitted after the initial run and do a second pass since BPF_EXIT can
> end up emitting different number of instructions depending on the
> program size.
> 
> Suggested-by: Jordan Niethe <jniethe5@gmail.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>   arch/powerpc/net/bpf_jit.h        |  3 +++
>   arch/powerpc/net/bpf_jit_comp.c   | 22 +++++++++++++++++++++-
>   arch/powerpc/net/bpf_jit_comp32.c |  2 +-
>   arch/powerpc/net/bpf_jit_comp64.c |  2 +-
>   4 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 89bd744c2bffd4..4023de1698b9f5 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -126,6 +126,7 @@
>   
>   #define SEEN_FUNC	0x20000000 /* might call external helpers */
>   #define SEEN_TAILCALL	0x40000000 /* uses tail calls */
> +#define SEEN_BIG_PROG	0x80000000 /* large prog, >32MB */
>   
>   #define SEEN_VREG_MASK	0x1ff80000 /* Volatile registers r3-r12 */
>   #define SEEN_NVREG_MASK	0x0003ffff /* Non volatile registers r14-r31 */
> @@ -179,6 +180,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>   void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx);
>   void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx);
>   void bpf_jit_realloc_regs(struct codegen_context *ctx);
> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
> +					int tmp_reg, unsigned long exit_addr);
>   
>   #endif
>   
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index fcbf7a917c566e..3204872fbf2738 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -72,6 +72,21 @@ static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
>   	return 0;
>   }
>   
> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
> +					int tmp_reg, unsigned long exit_addr)
> +{
> +	if (!(ctx->seen & SEEN_BIG_PROG) && is_offset_in_branch_range(exit_addr)) {
> +		PPC_JMP(exit_addr);
> +	} else {
> +		ctx->seen |= SEEN_BIG_PROG;
> +		PPC_FUNC_ADDR(tmp_reg, (unsigned long)image + exit_addr);
> +		EMIT(PPC_RAW_MTCTR(tmp_reg));
> +		EMIT(PPC_RAW_BCTR());
> +	}
> +
> +	return 0;
> +}
> +
>   struct powerpc64_jit_data {
>   	struct bpf_binary_header *header;
>   	u32 *addrs;
> @@ -155,12 +170,17 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>   		goto out_addrs;
>   	}
>   
> +	if (!is_offset_in_branch_range((long)cgctx.idx * 4))
> +		cgctx.seen |= SEEN_BIG_PROG;
> +
>   	/*
>   	 * If we have seen a tail call, we need a second pass.
>   	 * This is because bpf_jit_emit_common_epilogue() is called
>   	 * from bpf_jit_emit_tail_call() with a not yet stable ctx->seen.
> +	 * We also need a second pass if we ended up with too large
> +	 * a program so as to fix branches.
>   	 */
> -	if (cgctx.seen & SEEN_TAILCALL) {
> +	if (cgctx.seen & (SEEN_TAILCALL | SEEN_BIG_PROG)) {
>   		cgctx.idx = 0;
>   		if (bpf_jit_build_body(fp, 0, &cgctx, addrs, false)) {
>   			fp = org_fp;
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index a74d52204f8da2..d2a67574a23066 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -852,7 +852,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>   			 * we'll just fall through to the epilogue.
>   			 */
>   			if (i != flen - 1)
> -				PPC_JMP(exit_addr);
> +				bpf_jit_emit_exit_insn(image, ctx, tmp_reg, exit_addr);

On ppc32, if you use tmp_reg you must flag it as seen. But I think you could 
use r0 instead.
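
Something like this is what I mean (rough, untested sketch; it assumes r0 is 
usable as the scratch destination for the PPC_FUNC_ADDR + mtctr/bctr sequence 
emitted by the new helper):

			if (i != flen - 1)
				bpf_jit_emit_exit_insn(image, ctx, 0 /* r0 */, exit_addr);

Otherwise the tmp_reg variant would need the register to be explicitly flagged 
as seen (bpf_set_seen_register() or equivalent).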

>   			/* else fall through to the epilogue */
>   			break;
>   
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index f06c62089b1457..3351a866ef6207 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -761,7 +761,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>   			 * we'll just fall through to the epilogue.
>   			 */
>   			if (i != flen - 1)
> -				PPC_JMP(exit_addr);
> +				bpf_jit_emit_exit_insn(image, ctx, b2p[TMP_REG_1], exit_addr);
>   			/* else fall through to the epilogue */
>   			break;
>   
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
  2021-10-01 21:14 ` [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000 Naveen N. Rao
  2021-10-01 22:01   ` Song Liu
  2021-10-02 17:33   ` Johan Almbladh
@ 2021-10-03  8:07   ` Christophe Leroy
  2021-10-04 18:18     ` Naveen N. Rao
  2 siblings, 1 reply; 40+ messages in thread
From: Christophe Leroy @ 2021-10-03  8:07 UTC (permalink / raw)
  To: Naveen N. Rao, Michael Ellerman, Nicholas Piggin,
	Daniel Borkmann, Alexei Starovoitov, Johan Almbladh
  Cc: bpf, linuxppc-dev



On 01/10/2021 at 23:14, Naveen N. Rao wrote:
> We aren't handling subtraction involving an immediate value of
> 0x80000000 properly. Fix the same.
> 
> Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>   arch/powerpc/net/bpf_jit_comp64.c | 16 ++++++++--------
>   1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index ffb7a2877a8469..4641a50e82d50d 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -333,15 +333,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>   		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
>   		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
>   		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
> -			if (BPF_OP(code) == BPF_SUB)
> -				imm = -imm;
> -			if (imm) {
> -				if (imm >= -32768 && imm < 32768)
> -					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
> -				else {
> -					PPC_LI32(b2p[TMP_REG_1], imm);
> +			if (imm > -32768 && imm < 32768) {
> +				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg,
> +					BPF_OP(code) == BPF_SUB ? IMM_L(-imm) : IMM_L(imm)));
> +			} else {
> +				PPC_LI32(b2p[TMP_REG_1], imm);
> +				if (BPF_OP(code) == BPF_SUB)
> +					EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
> +				else
>   					EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
> -				}
>   			}
>   			goto bpf_alu32_trunc;

There is now so little code common to BPF_ADD and BPF_SUB that you should 
make them separate cases.

While at it, why not also use ADDIS when imm doesn't fit in 16 bits? That 
would be an ADDIS/ADDI pair instead of LIS/ORI/ADD.
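
For the ADDIS part, roughly something like this (untested sketch; it assumes 
PPC_RAW_ADDIS() is available in ppc-opcode.h and that the IMM_HA()/IMM_L() 
helpers in bpf_jit.h keep their current behaviour; BPF_SUB would become its 
own case and still needs care around imm == 0x80000000, so only ADD is shown):

		case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
			if (!imm)
				goto bpf_alu32_trunc;
			if (imm >= -32768 && imm < 32768) {
				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
			} else {
				/* addis adds the sign-adjusted high half, addi the low half */
				EMIT(PPC_RAW_ADDIS(dst_reg, dst_reg, IMM_HA(imm)));
				if (IMM_L(imm))
					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
			}
			goto bpf_alu32_trunc;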

>   		case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range
  2021-10-01 21:37   ` Song Liu
@ 2021-10-04 18:02     ` Naveen N. Rao
  0 siblings, 0 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-04 18:02 UTC (permalink / raw)
  To: Song Liu
  Cc: Alexei Starovoitov, bpf, Daniel Borkmann, Johan Almbladh,
	linuxppc-dev, Nicholas Piggin

Hi Song,
Thanks for the reviews.


Song Liu wrote:
> On Fri, Oct 1, 2021 at 2:16 PM Naveen N. Rao
> <naveen.n.rao@linux.vnet.ibm.com> wrote:
>>
>> Add a helper to check if a given offset is within the branch range for a
>> powerpc conditional branch instruction, and update some sites to use the
>> new helper.
>>
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> 
> Acked-by: Song Liu <songliubraving@fb.com>
> 
> With one nitpick:
> 
>> ---
>>  arch/powerpc/include/asm/code-patching.h | 1 +
>>  arch/powerpc/lib/code-patching.c         | 7 ++++++-
>>  arch/powerpc/net/bpf_jit.h               | 7 +------
>>  3 files changed, 8 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
>> index a95f63788c6b14..4ba834599c4d4c 100644
>> --- a/arch/powerpc/include/asm/code-patching.h
>> +++ b/arch/powerpc/include/asm/code-patching.h
>> @@ -23,6 +23,7 @@
>>  #define BRANCH_ABSOLUTE        0x2
>>
>>  bool is_offset_in_branch_range(long offset);
>> +bool is_offset_in_cond_branch_range(long offset);
>>  int create_branch(struct ppc_inst *instr, const u32 *addr,
>>                   unsigned long target, int flags);
>>  int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>> index f9a3019e37b43c..e2342b9a1ab9c9 100644
>> --- a/arch/powerpc/lib/code-patching.c
>> +++ b/arch/powerpc/lib/code-patching.c
>> @@ -228,6 +228,11 @@ bool is_offset_in_branch_range(long offset)
>>         return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
>>  }
>>
>> +bool is_offset_in_cond_branch_range(long offset)
>> +{
>> +       return offset >= -0x8000 && offset <= 0x7FFF && !(offset & 0x3);
>> +}
> 
> Why not inline this one?

Good point. This was modeled after the existing 
is_offset_in_branch_range(), and I guess both of those helpers can be 
inlined. I'll do a separate patch for that.
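
For reference, this is roughly what I have in mind (untested sketch; moving 
both helpers into code-patching.h as static inlines is an assumption, and the 
constants just mirror the existing range checks):

static inline bool is_offset_in_branch_range(long offset)
{
	/* 'b' takes a 26-bit signed offset (in bytes), word aligned */
	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
}

static inline bool is_offset_in_cond_branch_range(long offset)
{
	/* 'bc' takes a 16-bit signed offset (in bytes), word aligned */
	return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
}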


- Naveen


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range
  2021-10-03  7:50   ` Christophe Leroy
@ 2021-10-04 18:03     ` Naveen N. Rao
  0 siblings, 0 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-04 18:03 UTC (permalink / raw)
  To: Alexei Starovoitov, Christophe Leroy, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev

Hi Christophe,
Thanks for the reviews.


Christophe Leroy wrote:
> 
> 
> On 01/10/2021 at 23:14, Naveen N. Rao wrote:
>> Add a helper to check if a given offset is within the branch range for a
>> powerpc conditional branch instruction, and update some sites to use the
>> new helper.
>> 
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/include/asm/code-patching.h | 1 +
>>   arch/powerpc/lib/code-patching.c         | 7 ++++++-
>>   arch/powerpc/net/bpf_jit.h               | 7 +------
>>   3 files changed, 8 insertions(+), 7 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
>> index a95f63788c6b14..4ba834599c4d4c 100644
>> --- a/arch/powerpc/include/asm/code-patching.h
>> +++ b/arch/powerpc/include/asm/code-patching.h
>> @@ -23,6 +23,7 @@
>>   #define BRANCH_ABSOLUTE	0x2
>>   
>>   bool is_offset_in_branch_range(long offset);
>> +bool is_offset_in_cond_branch_range(long offset);
>>   int create_branch(struct ppc_inst *instr, const u32 *addr,
>>   		  unsigned long target, int flags);
>>   int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>> index f9a3019e37b43c..e2342b9a1ab9c9 100644
>> --- a/arch/powerpc/lib/code-patching.c
>> +++ b/arch/powerpc/lib/code-patching.c
>> @@ -228,6 +228,11 @@ bool is_offset_in_branch_range(long offset)
>>   	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
>>   }
>>   
>> +bool is_offset_in_cond_branch_range(long offset)
>> +{
>> +	return offset >= -0x8000 && offset <= 0x7FFF && !(offset & 0x3);
>> +}
> 
> Would be better without capital letters in the hex constants, i.e. 0x7fff 
> instead of 0x7FFF

Ack.

- Naveen


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 2/9] powerpc/bpf: Validate branch ranges
  2021-10-03  7:54   ` Christophe Leroy
@ 2021-10-04 18:11     ` Naveen N. Rao
  0 siblings, 0 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-04 18:11 UTC (permalink / raw)
  To: Alexei Starovoitov, Christophe Leroy, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev

Christophe Leroy wrote:
> 
> 
> Le 01/10/2021 à 23:14, Naveen N. Rao a écrit :
>> Add checks to ensure that we never emit branch instructions with
>> truncated branch offsets.
>> 
>> Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/net/bpf_jit.h        | 26 ++++++++++++++++++++------
>>   arch/powerpc/net/bpf_jit_comp.c   |  6 +++++-
>>   arch/powerpc/net/bpf_jit_comp32.c |  8 ++++++--
>>   arch/powerpc/net/bpf_jit_comp64.c |  8 ++++++--
>>   4 files changed, 37 insertions(+), 11 deletions(-)
>> 
>> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
>> index 935ea95b66359e..7e9b978b768ed9 100644
>> --- a/arch/powerpc/net/bpf_jit.h
>> +++ b/arch/powerpc/net/bpf_jit.h
>> @@ -24,16 +24,30 @@
>>   #define EMIT(instr)		PLANT_INSTR(image, ctx->idx, instr)
>>   
>>   /* Long jump; (unconditional 'branch') */
>> -#define PPC_JMP(dest)		EMIT(PPC_INST_BRANCH |			      \
>> -				     (((dest) - (ctx->idx * 4)) & 0x03fffffc))
>> +#define PPC_JMP(dest)							      \
>> +	do {								      \
>> +		long offset = (long)(dest) - (ctx->idx * 4);		      \
>> +		if (!is_offset_in_branch_range(offset)) {		      \
>> +			pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);			\
> 
> Does it really deserves a KERN_ERR ?

The intent is to ensure that we handle this when JIT'ing the BPF 
instruction. One of the subsequent patches fixes the only scenario where 
we can hit this today. In practice, we should never hit this, and if we 
do, it is a bug in the JIT.

> Isn't that something that can trigger with a userland request ?

This can't be triggered by unprivileged BPF programs, since those are 
limited to 4096 BPF instructions. You need root privileges to load BPF 
programs large enough to trigger out-of-range branches.
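
For context, the full validated macro would look roughly as below. The
body up to pr_err_ratelimited() is from the hunk quoted above; the early
return and the branch emission after it are my reconstruction and may
differ in detail from the actual patch:

#define PPC_JMP(dest)							      \
	do {								      \
		long offset = (long)(dest) - (ctx->idx * 4);		      \
		if (!is_offset_in_branch_range(offset)) {		      \
			pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx); \
			return -ERANGE;					      \
		}							      \
		EMIT(PPC_INST_BRANCH | (offset & 0x03fffffc));		      \
	} while (0)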


- Naveen


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK
  2021-10-03  7:55   ` Christophe Leroy
@ 2021-10-04 18:11     ` Naveen N. Rao
  2021-10-05  5:50       ` Christophe Leroy
  0 siblings, 1 reply; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-04 18:11 UTC (permalink / raw)
  To: Alexei Starovoitov, Christophe Leroy, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev

Christophe Leroy wrote:
> 
> 
> Le 01/10/2021 à 23:14, Naveen N. Rao a écrit :
>> From: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
>> 
>> SEEN_STACK is unused on PowerPC. Remove it. Also, have
>> SEEN_TAILCALL use 0x40000000.
> 
> Why change SEEN_TAILCALL ? Would it be a problem to leave it as is ?
> 
>> 
>> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
>> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

I prefer the bit usage to be contiguous. Changing SEEN_TAILCALL isn't a 
problem either.
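
For reference, with this patch (and SEEN_BIG_PROG coming in patch 4 on
top), the flag bits line up as:

#define SEEN_FUNC	0x20000000 /* might call external helpers */
#define SEEN_TAILCALL	0x40000000 /* uses tail calls */
#define SEEN_BIG_PROG	0x80000000 /* large prog, >32MB */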


- Naveen


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
  2021-10-03  8:07   ` Christophe Leroy
@ 2021-10-04 18:18     ` Naveen N. Rao
  2021-10-05  5:40       ` Christophe Leroy
  0 siblings, 1 reply; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-04 18:18 UTC (permalink / raw)
  To: Alexei Starovoitov, Christophe Leroy, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev

Christophe Leroy wrote:
> 
> 
> Le 01/10/2021 à 23:14, Naveen N. Rao a écrit :
>> We aren't handling subtraction involving an immediate value of
>> 0x80000000 properly. Fix the same.
>> 
>> Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/net/bpf_jit_comp64.c | 16 ++++++++--------
>>   1 file changed, 8 insertions(+), 8 deletions(-)
>> 
>> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
>> index ffb7a2877a8469..4641a50e82d50d 100644
>> --- a/arch/powerpc/net/bpf_jit_comp64.c
>> +++ b/arch/powerpc/net/bpf_jit_comp64.c
>> @@ -333,15 +333,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>>   		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
>>   		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
>>   		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
>> -			if (BPF_OP(code) == BPF_SUB)
>> -				imm = -imm;
>> -			if (imm) {
>> -				if (imm >= -32768 && imm < 32768)
>> -					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
>> -				else {
>> -					PPC_LI32(b2p[TMP_REG_1], imm);
>> +			if (imm > -32768 && imm < 32768) {
>> +				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg,
>> +					BPF_OP(code) == BPF_SUB ? IMM_L(-imm) : IMM_L(imm)));
>> +			} else {
>> +				PPC_LI32(b2p[TMP_REG_1], imm);
>> +				if (BPF_OP(code) == BPF_SUB)
>> +					EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
>> +				else
>>   					EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
>> -				}
>>   			}
>>   			goto bpf_alu32_trunc;
> 
> There is now so few code common to both BPF_ADD and BPF_SUB that you 
> should make them different cases.
> 
> While at it, why not also use ADDIS if imm is 32 bits ? That would be an 
> ADDIS/ADDI instead of LIS/ORI/ADD

Sure. I wanted to limit the change for this fix. We can do a separate 
patch to optimize code generation for BPF_ADD.
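
For the record, a rough sketch of that follow-up BPF_ADD optimization
(untested, not part of this series; it assumes the existing IMM_HA() and
PPC_RAW_ADDIS() helpers):

		case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
			if (!imm)
				goto bpf_alu32_trunc;
			if (imm >= -32768 && imm < 32768) {
				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
			} else {
				/* addis adds IMM_HA(imm) << 16, which compensates for
				 * the sign-extension of IMM_L(imm) done by addi below.
				 */
				EMIT(PPC_RAW_ADDIS(dst_reg, dst_reg, IMM_HA(imm)));
				if (IMM_L(imm))
					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
			}
			goto bpf_alu32_trunc;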


- Naveen


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 0/9] powerpc/bpf: Various fixes
  2021-10-02 17:41 ` [PATCH 0/9] powerpc/bpf: Various fixes Johan Almbladh
@ 2021-10-04 18:19   ` Naveen N. Rao
  0 siblings, 0 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-04 18:19 UTC (permalink / raw)
  To: Johan Almbladh
  Cc: Alexei Starovoitov, bpf, Daniel Borkmann, linuxppc-dev, Nicholas Piggin

Hi Johan,

Johan Almbladh wrote:
> On Fri, Oct 1, 2021 at 11:15 PM Naveen N. Rao
> <naveen.n.rao@linux.vnet.ibm.com> wrote:
>>
>> Various fixes to the eBPF JIT for powerpc, thanks to some new tests
>> added by Johan. This series fixes all failures in test_bpf on powerpc64.
>> There are still some failures on powerpc32 to be looked into.
> 
> Great work! I have tested it on powerpc64 in QEMU, which is the same
> setup that previously triggered an illegal instruction, and all tests
> pass now. On powerpc32 there are still some issues left as you say.

Thanks for the review, and the test!


- Naveen


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT
  2021-10-03  7:59   ` Christophe Leroy
@ 2021-10-04 18:24     ` Naveen N. Rao
  2021-10-05  5:46       ` Christophe Leroy
  0 siblings, 1 reply; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-04 18:24 UTC (permalink / raw)
  To: Alexei Starovoitov, Christophe Leroy, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev

Christophe Leroy wrote:
> 
> 
> Le 01/10/2021 à 23:14, Naveen N. Rao a écrit :
>> In some scenarios, it is possible that the program epilogue is outside
>> the branch range for a BPF_EXIT instruction. Instead of rejecting such
>> programs, emit an indirect branch. We track the size of the bpf program
>> emitted after the initial run and do a second pass since BPF_EXIT can
>> end up emitting different number of instructions depending on the
>> program size.
>> 
>> Suggested-by: Jordan Niethe <jniethe5@gmail.com>
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/net/bpf_jit.h        |  3 +++
>>   arch/powerpc/net/bpf_jit_comp.c   | 22 +++++++++++++++++++++-
>>   arch/powerpc/net/bpf_jit_comp32.c |  2 +-
>>   arch/powerpc/net/bpf_jit_comp64.c |  2 +-
>>   4 files changed, 26 insertions(+), 3 deletions(-)
>> 
>> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
>> index 89bd744c2bffd4..4023de1698b9f5 100644
>> --- a/arch/powerpc/net/bpf_jit.h
>> +++ b/arch/powerpc/net/bpf_jit.h
>> @@ -126,6 +126,7 @@
>>   
>>   #define SEEN_FUNC	0x20000000 /* might call external helpers */
>>   #define SEEN_TAILCALL	0x40000000 /* uses tail calls */
>> +#define SEEN_BIG_PROG	0x80000000 /* large prog, >32MB */
>>   
>>   #define SEEN_VREG_MASK	0x1ff80000 /* Volatile registers r3-r12 */
>>   #define SEEN_NVREG_MASK	0x0003ffff /* Non volatile registers r14-r31 */
>> @@ -179,6 +180,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>>   void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx);
>>   void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx);
>>   void bpf_jit_realloc_regs(struct codegen_context *ctx);
>> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
>> +					int tmp_reg, unsigned long exit_addr);
>>   
>>   #endif
>>   
>> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
>> index fcbf7a917c566e..3204872fbf2738 100644
>> --- a/arch/powerpc/net/bpf_jit_comp.c
>> +++ b/arch/powerpc/net/bpf_jit_comp.c
>> @@ -72,6 +72,21 @@ static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
>>   	return 0;
>>   }
>>   
>> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
>> +					int tmp_reg, unsigned long exit_addr)
>> +{
>> +	if (!(ctx->seen & SEEN_BIG_PROG) && is_offset_in_branch_range(exit_addr)) {
>> +		PPC_JMP(exit_addr);
>> +	} else {
>> +		ctx->seen |= SEEN_BIG_PROG;
>> +		PPC_FUNC_ADDR(tmp_reg, (unsigned long)image + exit_addr);
>> +		EMIT(PPC_RAW_MTCTR(tmp_reg));
>> +		EMIT(PPC_RAW_BCTR());
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>   struct powerpc64_jit_data {
>>   	struct bpf_binary_header *header;
>>   	u32 *addrs;
>> @@ -155,12 +170,17 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>>   		goto out_addrs;
>>   	}
>>   
>> +	if (!is_offset_in_branch_range((long)cgctx.idx * 4))
>> +		cgctx.seen |= SEEN_BIG_PROG;
>> +
>>   	/*
>>   	 * If we have seen a tail call, we need a second pass.
>>   	 * This is because bpf_jit_emit_common_epilogue() is called
>>   	 * from bpf_jit_emit_tail_call() with a not yet stable ctx->seen.
>> +	 * We also need a second pass if we ended up with too large
>> +	 * a program so as to fix branches.
>>   	 */
>> -	if (cgctx.seen & SEEN_TAILCALL) {
>> +	if (cgctx.seen & (SEEN_TAILCALL | SEEN_BIG_PROG)) {
>>   		cgctx.idx = 0;
>>   		if (bpf_jit_build_body(fp, 0, &cgctx, addrs, false)) {
>>   			fp = org_fp;
>> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
>> index a74d52204f8da2..d2a67574a23066 100644
>> --- a/arch/powerpc/net/bpf_jit_comp32.c
>> +++ b/arch/powerpc/net/bpf_jit_comp32.c
>> @@ -852,7 +852,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>>   			 * we'll just fall through to the epilogue.
>>   			 */
>>   			if (i != flen - 1)
>> -				PPC_JMP(exit_addr);
>> +				bpf_jit_emit_exit_insn(image, ctx, tmp_reg, exit_addr);
> 
> On ppc32, if you use tmp_reg you must flag it. But I think you could use 
> r0 instead.

Indeed. Can we drop tracking of the temp registers and using them while
remapping registers? Are you seeing significant benefits with re-use of 
those temp registers?

- Naveen


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
  2021-10-04 18:18     ` Naveen N. Rao
@ 2021-10-05  5:40       ` Christophe Leroy
  0 siblings, 0 replies; 40+ messages in thread
From: Christophe Leroy @ 2021-10-05  5:40 UTC (permalink / raw)
  To: Naveen N. Rao, Alexei Starovoitov, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev



Le 04/10/2021 à 20:18, Naveen N. Rao a écrit :
> Christophe Leroy wrote:
>>
>>
>> Le 01/10/2021 à 23:14, Naveen N. Rao a écrit :
>>> We aren't handling subtraction involving an immediate value of
>>> 0x80000000 properly. Fix the same.
>>>
>>> Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for 
>>> extended BPF")
>>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>>> ---
>>>   arch/powerpc/net/bpf_jit_comp64.c | 16 ++++++++--------
>>>   1 file changed, 8 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/arch/powerpc/net/bpf_jit_comp64.c 
>>> b/arch/powerpc/net/bpf_jit_comp64.c
>>> index ffb7a2877a8469..4641a50e82d50d 100644
>>> --- a/arch/powerpc/net/bpf_jit_comp64.c
>>> +++ b/arch/powerpc/net/bpf_jit_comp64.c
>>> @@ -333,15 +333,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 
>>> *image, struct codegen_context *
>>>           case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
>>>           case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
>>>           case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
>>> -            if (BPF_OP(code) == BPF_SUB)
>>> -                imm = -imm;
>>> -            if (imm) {
>>> -                if (imm >= -32768 && imm < 32768)
>>> -                    EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
>>> -                else {
>>> -                    PPC_LI32(b2p[TMP_REG_1], imm);
>>> +            if (imm > -32768 && imm < 32768) {
>>> +                EMIT(PPC_RAW_ADDI(dst_reg, dst_reg,
>>> +                    BPF_OP(code) == BPF_SUB ? IMM_L(-imm) : 
>>> IMM_L(imm)));
>>> +            } else {
>>> +                PPC_LI32(b2p[TMP_REG_1], imm);
>>> +                if (BPF_OP(code) == BPF_SUB)
>>> +                    EMIT(PPC_RAW_SUB(dst_reg, dst_reg, 
>>> b2p[TMP_REG_1]));
>>> +                else
>>>                       EMIT(PPC_RAW_ADD(dst_reg, dst_reg, 
>>> b2p[TMP_REG_1]));
>>> -                }
>>>               }
>>>               goto bpf_alu32_trunc;
>>
>> There is now so few code common to both BPF_ADD and BPF_SUB that you 
>> should make them different cases.
>>
>> While at it, why not also use ADDIS if imm is 32 bits ? That would be 
>> an ADDIS/ADDI instead of LIS/ORI/ADD
> 
> Sure. I wanted to limit the change for this fix. We can do a separate 
> patch to optimize code generation for BPF_ADD.
> 

Sure, this second part was just a thought; I agree it should be another 
patch.

My main comment here is to split this out into a separate case. I don't 
think it increases the change much, and IMO it is easier to read:

diff --git a/arch/powerpc/net/bpf_jit_comp64.c 
b/arch/powerpc/net/bpf_jit_comp64.c
index ffb7a2877a84..39226d88c558 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -330,11 +330,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 
*image, struct codegen_context *
  			EMIT(PPC_RAW_SUB(dst_reg, dst_reg, src_reg));
  			goto bpf_alu32_trunc;
  		case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
-		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
  		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
-		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
-			if (BPF_OP(code) == BPF_SUB)
-				imm = -imm;
  			if (imm) {
  				if (imm >= -32768 && imm < 32768)
  					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
@@ -344,6 +340,17 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 
*image, struct codegen_context *
  				}
  			}
  			goto bpf_alu32_trunc;
+		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
+			if (imm) {
+				if (-imm >= -32768 && -imm < 32768) {
+					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(-imm)));
+				} else {
+					PPC_LI32(b2p[TMP_REG_1], imm);
+					EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
+				}
+			}
+			goto bpf_alu32_trunc;
  		case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
  		case BPF_ALU64 | BPF_MUL | BPF_X: /* dst *= src */
  			if (BPF_CLASS(code) == BPF_ALU)

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT
  2021-10-04 18:24     ` Naveen N. Rao
@ 2021-10-05  5:46       ` Christophe Leroy
  2022-01-07 11:46         ` Naveen N. Rao
  0 siblings, 1 reply; 40+ messages in thread
From: Christophe Leroy @ 2021-10-05  5:46 UTC (permalink / raw)
  To: Naveen N. Rao, Alexei Starovoitov, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev



Le 04/10/2021 à 20:24, Naveen N. Rao a écrit :
> Christophe Leroy wrote:
>>
>>
>> Le 01/10/2021 à 23:14, Naveen N. Rao a écrit :
>>> In some scenarios, it is possible that the program epilogue is outside
>>> the branch range for a BPF_EXIT instruction. Instead of rejecting such
>>> programs, emit an indirect branch. We track the size of the bpf program
>>> emitted after the initial run and do a second pass since BPF_EXIT can
>>> end up emitting different number of instructions depending on the
>>> program size.
>>>
>>> Suggested-by: Jordan Niethe <jniethe5@gmail.com>
>>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>>> ---
>>>   arch/powerpc/net/bpf_jit.h        |  3 +++
>>>   arch/powerpc/net/bpf_jit_comp.c   | 22 +++++++++++++++++++++-
>>>   arch/powerpc/net/bpf_jit_comp32.c |  2 +-
>>>   arch/powerpc/net/bpf_jit_comp64.c |  2 +-
>>>   4 files changed, 26 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
>>> index 89bd744c2bffd4..4023de1698b9f5 100644
>>> --- a/arch/powerpc/net/bpf_jit.h
>>> +++ b/arch/powerpc/net/bpf_jit.h
>>> @@ -126,6 +126,7 @@
>>>   #define SEEN_FUNC    0x20000000 /* might call external helpers */
>>>   #define SEEN_TAILCALL    0x40000000 /* uses tail calls */
>>> +#define SEEN_BIG_PROG    0x80000000 /* large prog, >32MB */
>>>   #define SEEN_VREG_MASK    0x1ff80000 /* Volatile registers r3-r12 */
>>>   #define SEEN_NVREG_MASK    0x0003ffff /* Non volatile registers 
>>> r14-r31 */
>>> @@ -179,6 +180,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 
>>> *image, struct codegen_context *
>>>   void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx);
>>>   void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx);
>>>   void bpf_jit_realloc_regs(struct codegen_context *ctx);
>>> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
>>> +                    int tmp_reg, unsigned long exit_addr);
>>>   #endif
>>> diff --git a/arch/powerpc/net/bpf_jit_comp.c 
>>> b/arch/powerpc/net/bpf_jit_comp.c
>>> index fcbf7a917c566e..3204872fbf2738 100644
>>> --- a/arch/powerpc/net/bpf_jit_comp.c
>>> +++ b/arch/powerpc/net/bpf_jit_comp.c
>>> @@ -72,6 +72,21 @@ static int bpf_jit_fixup_subprog_calls(struct 
>>> bpf_prog *fp, u32 *image,
>>>       return 0;
>>>   }
>>> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
>>> +                    int tmp_reg, unsigned long exit_addr)
>>> +{
>>> +    if (!(ctx->seen & SEEN_BIG_PROG) && 
>>> is_offset_in_branch_range(exit_addr)) {
>>> +        PPC_JMP(exit_addr);
>>> +    } else {
>>> +        ctx->seen |= SEEN_BIG_PROG;
>>> +        PPC_FUNC_ADDR(tmp_reg, (unsigned long)image + exit_addr);
>>> +        EMIT(PPC_RAW_MTCTR(tmp_reg));
>>> +        EMIT(PPC_RAW_BCTR());
>>> +    }
>>> +
>>> +    return 0;
>>> +}
>>> +
>>>   struct powerpc64_jit_data {
>>>       struct bpf_binary_header *header;
>>>       u32 *addrs;
>>> @@ -155,12 +170,17 @@ struct bpf_prog *bpf_int_jit_compile(struct 
>>> bpf_prog *fp)
>>>           goto out_addrs;
>>>       }
>>> +    if (!is_offset_in_branch_range((long)cgctx.idx * 4))
>>> +        cgctx.seen |= SEEN_BIG_PROG;
>>> +
>>>       /*
>>>        * If we have seen a tail call, we need a second pass.
>>>        * This is because bpf_jit_emit_common_epilogue() is called
>>>        * from bpf_jit_emit_tail_call() with a not yet stable ctx->seen.
>>> +     * We also need a second pass if we ended up with too large
>>> +     * a program so as to fix branches.
>>>        */
>>> -    if (cgctx.seen & SEEN_TAILCALL) {
>>> +    if (cgctx.seen & (SEEN_TAILCALL | SEEN_BIG_PROG)) {
>>>           cgctx.idx = 0;
>>>           if (bpf_jit_build_body(fp, 0, &cgctx, addrs, false)) {
>>>               fp = org_fp;
>>> diff --git a/arch/powerpc/net/bpf_jit_comp32.c 
>>> b/arch/powerpc/net/bpf_jit_comp32.c
>>> index a74d52204f8da2..d2a67574a23066 100644
>>> --- a/arch/powerpc/net/bpf_jit_comp32.c
>>> +++ b/arch/powerpc/net/bpf_jit_comp32.c
>>> @@ -852,7 +852,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 
>>> *image, struct codegen_context *
>>>                * we'll just fall through to the epilogue.
>>>                */
>>>               if (i != flen - 1)
>>> -                PPC_JMP(exit_addr);
>>> +                bpf_jit_emit_exit_insn(image, ctx, tmp_reg, exit_addr);
>>
>> On ppc32, if you use tmp_reg you must flag it. But I think you could 
>> use r0 instead.
> 
> Indeed. Can we drop tracking of the temp registers and using them while
> remapping registers? Are you seeing significant benefits with re-use of 
> those temp registers?
> 

I'm not sure I follow you.

On ppc32, all volatile registers are used for function arguments, so the 
temp registers are necessarily non-volatile, and we track them like all 
the other non-volatile registers we use.

I think saving only the non-volatile registers we actually use on the 
stack provides a real benefit; otherwise you wouldn't have implemented 
it, would you?

Anyway here you should use _R0 instead of tmp_reg.
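
I.e., on ppc32 the call site would then simply be:

			if (i != flen - 1)
				bpf_jit_emit_exit_insn(image, ctx, _R0, exit_addr);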

Christophe

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK
  2021-10-04 18:11     ` Naveen N. Rao
@ 2021-10-05  5:50       ` Christophe Leroy
  2021-10-05 20:22         ` Naveen N. Rao
  0 siblings, 1 reply; 40+ messages in thread
From: Christophe Leroy @ 2021-10-05  5:50 UTC (permalink / raw)
  To: Naveen N. Rao, Alexei Starovoitov, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev



Le 04/10/2021 à 20:11, Naveen N. Rao a écrit :
> Christophe Leroy wrote:
>>
>>
>> Le 01/10/2021 à 23:14, Naveen N. Rao a écrit :
>>> From: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
>>>
>>> SEEN_STACK is unused on PowerPC. Remove it. Also, have
>>> SEEN_TAILCALL use 0x40000000.
>>
>> Why change SEEN_TAILCALL ? Would it be a problem to leave it as is ?
>>
>>>
>>> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
>>> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> 
> I prefer the bit usage to be contiguous. Changing SEEN_TAILCALL isn't a 
> problem either.
> 

Well, you are adding SEEN_BIG_PROG in the following patch, so it would 
still be contiguous at the end.

I don't really mind, but I thought it would be less churn to just leave 
SEEN_TAILCALL as-is and re-use 0x40000000 for SEEN_BIG_PROG.

Anyway

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK
  2021-10-05  5:50       ` Christophe Leroy
@ 2021-10-05 20:22         ` Naveen N. Rao
  0 siblings, 0 replies; 40+ messages in thread
From: Naveen N. Rao @ 2021-10-05 20:22 UTC (permalink / raw)
  To: Alexei Starovoitov, Christophe Leroy, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev

Christophe Leroy wrote:
> 
> 
> Le 04/10/2021 à 20:11, Naveen N. Rao a écrit :
>> Christophe Leroy wrote:
>>>
>>>
>>> Le 01/10/2021 à 23:14, Naveen N. Rao a écrit :
>>>> From: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
>>>>
>>>> SEEN_STACK is unused on PowerPC. Remove it. Also, have
>>>> SEEN_TAILCALL use 0x40000000.
>>>
>>> Why change SEEN_TAILCALL ? Would it be a problem to leave it as is ?
>>>
>>>>
>>>> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
>>>> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
>> 
>> I prefer the bit usage to be contiguous. Changing SEEN_TAILCALL isn't a 
>> problem either.
>> 
> 
> Well you are adding SEEN_BIG_PROG in following patch so it would still 
> be contiguous at the end.
> 
> I don't really mind but I thought it would be less churn to just leave 
> SEEN_TAILCALL as is and re-use 0x40000000 for SEEN_BIG_PROG.

Ah ok. This patch was from a different series and it made more sense to 
change the bit number there. I have reused the patch here as-is since 
the change is fairly trivial.


- Naveen


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT
  2021-10-05  5:46       ` Christophe Leroy
@ 2022-01-07 11:46         ` Naveen N. Rao
  0 siblings, 0 replies; 40+ messages in thread
From: Naveen N. Rao @ 2022-01-07 11:46 UTC (permalink / raw)
  To: Alexei Starovoitov, Christophe Leroy, Daniel Borkmann,
	Johan Almbladh, Michael Ellerman, Nicholas Piggin
  Cc: bpf, linuxppc-dev

Christophe Leroy wrote:
> 
> 
> Le 04/10/2021 à 20:24, Naveen N. Rao a écrit :
>> Christophe Leroy wrote:
>>>
>>>
>>> Le 01/10/2021 à 23:14, Naveen N. Rao a écrit :
>>>> In some scenarios, it is possible that the program epilogue is outside
>>>> the branch range for a BPF_EXIT instruction. Instead of rejecting such
>>>> programs, emit an indirect branch. We track the size of the bpf program
>>>> emitted after the initial run and do a second pass since BPF_EXIT can
>>>> end up emitting different number of instructions depending on the
>>>> program size.
>>>>
>>>> Suggested-by: Jordan Niethe <jniethe5@gmail.com>
>>>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>>>> ---
>>>>   arch/powerpc/net/bpf_jit.h        |  3 +++
>>>>   arch/powerpc/net/bpf_jit_comp.c   | 22 +++++++++++++++++++++-
>>>>   arch/powerpc/net/bpf_jit_comp32.c |  2 +-
>>>>   arch/powerpc/net/bpf_jit_comp64.c |  2 +-
>>>>   4 files changed, 26 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
>>>> index 89bd744c2bffd4..4023de1698b9f5 100644
>>>> --- a/arch/powerpc/net/bpf_jit.h
>>>> +++ b/arch/powerpc/net/bpf_jit.h
>>>> @@ -126,6 +126,7 @@
>>>>   #define SEEN_FUNC    0x20000000 /* might call external helpers */
>>>>   #define SEEN_TAILCALL    0x40000000 /* uses tail calls */
>>>> +#define SEEN_BIG_PROG    0x80000000 /* large prog, >32MB */
>>>>   #define SEEN_VREG_MASK    0x1ff80000 /* Volatile registers r3-r12 */
>>>>   #define SEEN_NVREG_MASK    0x0003ffff /* Non volatile registers 
>>>> r14-r31 */
>>>> @@ -179,6 +180,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 
>>>> *image, struct codegen_context *
>>>>   void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx);
>>>>   void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx);
>>>>   void bpf_jit_realloc_regs(struct codegen_context *ctx);
>>>> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
>>>> +                    int tmp_reg, unsigned long exit_addr);
>>>>   #endif
>>>> diff --git a/arch/powerpc/net/bpf_jit_comp.c 
>>>> b/arch/powerpc/net/bpf_jit_comp.c
>>>> index fcbf7a917c566e..3204872fbf2738 100644
>>>> --- a/arch/powerpc/net/bpf_jit_comp.c
>>>> +++ b/arch/powerpc/net/bpf_jit_comp.c
>>>> @@ -72,6 +72,21 @@ static int bpf_jit_fixup_subprog_calls(struct 
>>>> bpf_prog *fp, u32 *image,
>>>>       return 0;
>>>>   }
>>>> +int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx,
>>>> +                    int tmp_reg, unsigned long exit_addr)
>>>> +{
>>>> +    if (!(ctx->seen & SEEN_BIG_PROG) && 
>>>> is_offset_in_branch_range(exit_addr)) {
>>>> +        PPC_JMP(exit_addr);
>>>> +    } else {
>>>> +        ctx->seen |= SEEN_BIG_PROG;
>>>> +        PPC_FUNC_ADDR(tmp_reg, (unsigned long)image + exit_addr);
>>>> +        EMIT(PPC_RAW_MTCTR(tmp_reg));
>>>> +        EMIT(PPC_RAW_BCTR());
>>>> +    }
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>>   struct powerpc64_jit_data {
>>>>       struct bpf_binary_header *header;
>>>>       u32 *addrs;
>>>> @@ -155,12 +170,17 @@ struct bpf_prog *bpf_int_jit_compile(struct 
>>>> bpf_prog *fp)
>>>>           goto out_addrs;
>>>>       }
>>>> +    if (!is_offset_in_branch_range((long)cgctx.idx * 4))
>>>> +        cgctx.seen |= SEEN_BIG_PROG;
>>>> +
>>>>       /*
>>>>        * If we have seen a tail call, we need a second pass.
>>>>        * This is because bpf_jit_emit_common_epilogue() is called
>>>>        * from bpf_jit_emit_tail_call() with a not yet stable ctx->seen.
>>>> +     * We also need a second pass if we ended up with too large
>>>> +     * a program so as to fix branches.
>>>>        */
>>>> -    if (cgctx.seen & SEEN_TAILCALL) {
>>>> +    if (cgctx.seen & (SEEN_TAILCALL | SEEN_BIG_PROG)) {
>>>>           cgctx.idx = 0;
>>>>           if (bpf_jit_build_body(fp, 0, &cgctx, addrs, false)) {
>>>>               fp = org_fp;
>>>> diff --git a/arch/powerpc/net/bpf_jit_comp32.c 
>>>> b/arch/powerpc/net/bpf_jit_comp32.c
>>>> index a74d52204f8da2..d2a67574a23066 100644
>>>> --- a/arch/powerpc/net/bpf_jit_comp32.c
>>>> +++ b/arch/powerpc/net/bpf_jit_comp32.c
>>>> @@ -852,7 +852,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 
>>>> *image, struct codegen_context *
>>>>                * we'll just fall through to the epilogue.
>>>>                */
>>>>               if (i != flen - 1)
>>>> -                PPC_JMP(exit_addr);
>>>> +                bpf_jit_emit_exit_insn(image, ctx, tmp_reg, exit_addr);
>>>
>>> On ppc32, if you use tmp_reg you must flag it. But I think you could 
>>> use r0 instead.
>> 
>> Indeed. Can we drop tracking of the temp registers and using them while
>> remapping registers? Are you seeing significant benefits with re-use of 
>> those temp registers?
>> 
> 
> I'm not sure to follow you.
> 
> On ppc32, all volatile registers are used for function arguments, so 
> temp registers are necessarily non-volatile so we track them as all 
> non-volatile registers we use.
> 
> I think saving on stack only the non-volatile registers we use provides 
> real benefit, otherwise you wouldn't have implemented it would you ?

You're right. I was wary of having to track temporary register usage, 
which is a bit harder and prone to mistakes like the above. A related 
concern was that the register remapping is only used if there are no 
helper calls, which looks like a big limitation.

But I do agree that it is worth the trouble for ppc32, given the 
register usage.

> 
> Anyway here you should use _R0 instead of tmp_reg.


Thanks,
Naveen

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2022-01-07 11:46 UTC | newest]

Thread overview: 40+ messages
2021-10-01 21:14 [PATCH 0/9] powerpc/bpf: Various fixes Naveen N. Rao
2021-10-01 21:14 ` [PATCH 1/9] powerpc/lib: Add helper to check if offset is within conditional branch range Naveen N. Rao
2021-10-01 21:37   ` Song Liu
2021-10-04 18:02     ` Naveen N. Rao
2021-10-03  7:50   ` Christophe Leroy
2021-10-04 18:03     ` Naveen N. Rao
2021-10-01 21:14 ` [PATCH 2/9] powerpc/bpf: Validate branch ranges Naveen N. Rao
2021-10-01 21:45   ` Song Liu
2021-10-02 17:29   ` Johan Almbladh
2021-10-03  7:54   ` Christophe Leroy
2021-10-04 18:11     ` Naveen N. Rao
2021-10-01 21:14 ` [PATCH 3/9] powerpc/bpf: Remove unused SEEN_STACK Naveen N. Rao
2021-10-01 21:47   ` Song Liu
2021-10-02 17:30   ` Johan Almbladh
2021-10-03  7:55   ` Christophe Leroy
2021-10-04 18:11     ` Naveen N. Rao
2021-10-05  5:50       ` Christophe Leroy
2021-10-05 20:22         ` Naveen N. Rao
2021-10-01 21:14 ` [PATCH 4/9] powerpc/bpf: Handle large branch ranges with BPF_EXIT Naveen N. Rao
2021-10-01 21:53   ` Song Liu
2021-10-02 17:31   ` Johan Almbladh
2021-10-03  7:59   ` Christophe Leroy
2021-10-04 18:24     ` Naveen N. Rao
2021-10-05  5:46       ` Christophe Leroy
2022-01-07 11:46         ` Naveen N. Rao
2021-10-01 21:14 ` [PATCH 5/9] powerpc/bpf: Fix BPF_MOD when imm == 1 Naveen N. Rao
2021-10-01 21:55   ` Song Liu
2021-10-02 17:32   ` Johan Almbladh
2021-10-01 21:14 ` [PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000 Naveen N. Rao
2021-10-01 22:01   ` Song Liu
2021-10-02 17:33   ` Johan Almbladh
2021-10-03  8:07   ` Christophe Leroy
2021-10-04 18:18     ` Naveen N. Rao
2021-10-05  5:40       ` Christophe Leroy
2021-10-01 21:14 ` [PATCH 7/9] powerpc/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06 Naveen N. Rao
2021-10-02 17:35   ` Johan Almbladh
2021-10-01 21:14 ` [PATCH 8/9] powerpc/security: Add a helper to query stf_barrier type Naveen N. Rao
2021-10-01 21:14 ` [PATCH 9/9] powerpc/bpf: Emit stf barrier instruction sequences for BPF_NOSPEC Naveen N. Rao
2021-10-02 17:41 ` [PATCH 0/9] powerpc/bpf: Various fixes Johan Almbladh
2021-10-04 18:19   ` Naveen N. Rao
