linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH bpf] bpf: powerpc64: optimize JIT passes for bpf function calls
From: Sandipan Das @ 2018-12-03 12:21 UTC
  To: daniel, ast; +Cc: naveen.n.rao, linuxppc-dev, netdev

Once the JITed images for each function in a multi-function program
are generated after the first three JIT passes, we only need to fix
the target address for the branch instruction corresponding to each
bpf-to-bpf function call.
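
For illustration only, here is a minimal userspace sketch of that idea
(this is not the in-tree JIT code; the five-word slot size, the toy
encodings, the offsets and the helper name are made up for the example):
during the normal passes every bpf-to-bpf call site is emitted as a
fixed-length, NOP-padded slot, so a later pass can overwrite just those
slots without shifting any other offset in the image.

#include <stdint.h>
#include <stdio.h>

#define NOP        0x60000000u /* ppc64 "ori 0,0,0" */
#define CALL_WORDS 5           /* assumed fixed size of a call slot, in words */

/*
 * Overwrite one reserved call slot with its final target. The slot size
 * never changes, so nothing else in the image has to move.
 */
static void fixup_call(uint32_t *image, uint32_t word_off, uint64_t target)
{
	unsigned int i;

	/* Toy encoding only: a real JIT emits the proper load of the
	 * 64-bit address followed by an indirect branch-and-link. */
	image[word_off + 0] = (uint32_t)(target >> 32);
	image[word_off + 1] = (uint32_t)(target & 0xffffffffu);
	for (i = 2; i < CALL_WORDS; i++)
		image[word_off + i] = NOP;
}

int main(void)
{
	uint32_t image[32];
	uint32_t call_off[2] = { 4, 16 };         /* word offsets of two call sites */
	uint64_t subprog[2] = { 0x1000, 0x2000 }; /* final subprog addresses */
	unsigned int i;

	for (i = 0; i < 32; i++)
		image[i] = NOP;

	/* The single extra pass: touch only the reserved call slots. */
	for (i = 0; i < 2; i++)
		fixup_call(image, call_off[i], subprog[i]);

	printf("image[%u] = 0x%08x\n",
	       (unsigned int)call_off[0], (unsigned int)image[call_off[0]]);
	return 0;
}

The point of the sketch is only that the image layout is decided once
during the normal passes, and the extra pass reduces to a handful of
in-place stores at known offsets.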

This introduces the following optimizations for reducing the work
done by the JIT compiler when handling multi-function programs:

  [1] Instead of doing two extra passes to fix the bpf function calls,
      do just one as that would be sufficient.

  [2] During the extra pass, only overwrite the instruction sequences
      for the bpf-to-bpf function calls as everything else would still
      remain exactly the same. This also reduces the number of writes
      to the JITed image.

  [3] Do not regenerate the prologue and the epilogue during the extra
      pass as that would be redundant.

Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp64.c | 66 +++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 17482f5de3e2..9393e231cbc2 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -891,6 +891,55 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 	return 0;
 }
 
+/* Fix the branch target addresses for subprog calls */
+static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image,
+				       struct codegen_context *ctx, u32 *addrs)
+{
+	const struct bpf_insn *insn = fp->insnsi;
+	bool func_addr_fixed;
+	u64 func_addr;
+	u32 tmp_idx;
+	int i, ret;
+
+	for (i = 0; i < fp->len; i++) {
+		/*
+		 * During the extra pass, only the branch target addresses for
+		 * the subprog calls need to be fixed. All other instructions
+		 * can be left untouched.
+		 *
+		 * The JITed image length does not change because we already
+		 * ensure that the JITed instruction sequences for these calls
+		 * are of fixed length by padding them with NOPs.
+		 */
+		if (insn[i].code == (BPF_JMP | BPF_CALL) &&
+		    insn[i].src_reg == BPF_PSEUDO_CALL) {
+			ret = bpf_jit_get_func_addr(fp, &insn[i], true,
+						    &func_addr,
+						    &func_addr_fixed);
+			if (ret < 0)
+				return ret;
+
+			/*
+			 * Save ctx->idx, which currently points to the end of
+			 * the JITed image, and temporarily set it to the
+			 * offset of the instruction sequence corresponding to
+			 * this subprog call.
+			 */
+			tmp_idx = ctx->idx;
+			ctx->idx = addrs[i] / 4;
+			bpf_jit_emit_func_call_rel(image, ctx, func_addr);
+
+			/*
+			 * Restore ctx->idx here. This is safe as the length
+			 * of the JITed sequence remains unchanged.
+			 */
+			ctx->idx = tmp_idx;
+		}
+	}
+
+	return 0;
+}
+
 struct powerpc64_jit_data {
 	struct bpf_binary_header *header;
 	u32 *addrs;
@@ -989,6 +1038,22 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 skip_init_ctx:
 	code_base = (u32 *)(image + FUNCTION_DESCR_SIZE);
 
+	if (extra_pass) {
+		/*
+		 * Do not touch the prologue and epilogue as they will remain
+		 * unchanged. Only fix the branch target addresses for subprog
+		 * calls in the body.
+		 *
+		 * This does not change the offsets and lengths of the subprog
+		 * call instruction sequences, and hence does not change the
+		 * size of the JITed image either.
+		 */
+		bpf_jit_fixup_subprog_calls(fp, code_base, &cgctx, addrs);
+
+		/* There is no need to perform the usual passes. */
+		goto skip_codegen_passes;
+	}
+
 	/* Code generation passes 1-2 */
 	for (pass = 1; pass < 3; pass++) {
 		/* Now build the prologue, body code & epilogue for real. */
@@ -1002,6 +1067,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 				proglen - (cgctx.idx * 4), cgctx.seen);
 	}
 
+skip_codegen_passes:
 	if (bpf_jit_enable > 1)
 		/*
 		 * Note that we output the base address of the code_base
-- 
2.17.2



* Re: [PATCH bpf] bpf: powerpc64: optimize JIT passes for bpf function calls
From: Daniel Borkmann @ 2018-12-03 12:48 UTC
  To: Sandipan Das, ast; +Cc: naveen.n.rao, linuxppc-dev, netdev

Hi Sandipan,

On 12/03/2018 01:21 PM, Sandipan Das wrote:
> Once the JITed images for each function in a multi-function program
> are generated after the first three JIT passes, we only need to fix
> the target address for the branch instruction corresponding to each
> bpf-to-bpf function call.
> 
> This introduces the following optimizations for reducing the work
> done by the JIT compiler when handling multi-function programs:
> 
>   [1] Instead of doing two extra passes to fix the bpf function calls,
>       do just one as that would be sufficient.
> 
>   [2] During the extra pass, only overwrite the instruction sequences
>       for the bpf-to-bpf function calls as everything else would still
>       remain exactly the same. This also reduces the number of writes
>       to the JITed image.
> 
>   [3] Do not regenerate the prologue and the epilogue during the extra
>       pass as that would be redundant.
> 
> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>

Thanks for the patch, just to clarify, it's targeted at bpf-next and
not bpf, correct?

Thanks,
Daniel


* Re: [PATCH bpf] bpf: powerpc64: optimize JIT passes for bpf function calls
From: Sandipan Das @ 2018-12-03 13:26 UTC
  To: Daniel Borkmann; +Cc: naveen.n.rao, linuxppc-dev, ast, netdev

Hi Daniel,

On 03/12/18 6:18 PM, Daniel Borkmann wrote:
> 
> Thanks for the patch, just to clarify, it's targeted at bpf-next and
> not bpf, correct?
> 

This patch is targeted at the bpf tree.

This depends on commit e2c95a61656d ("bpf, ppc64: generalize fetching
subprog into bpf_jit_get_func_addr") which is already available in the
bpf tree.
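
For context, a rough userspace-only sketch of what such a generalized
helper conceptually resolves (the types, names and lookup rules here are
simplified and do not mirror kernel/bpf/core.c): a call either targets a
kernel helper, whose address is fixed and known from the very first
pass, or another bpf function, whose address only becomes known once all
subprogs have been JITed, i.e. in the extra pass.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_call { bool pseudo_call; int imm; };

static int toy_get_func_addr(const struct toy_call *call, bool extra_pass,
			     const uint64_t *subprog_addrs,
			     unsigned int nr_subprogs,
			     uint64_t helper_base,
			     uint64_t *addr, bool *fixed)
{
	if (call->pseudo_call) {
		/* bpf-to-bpf call: in this toy model, imm indexes the
		 * callee subprog */
		if (!extra_pass) {
			/* placeholder until all subprogs have been JITed */
			*addr = 0;
		} else {
			if ((unsigned int)call->imm >= nr_subprogs)
				return -EINVAL;
			*addr = subprog_addrs[call->imm];
		}
		*fixed = false;
	} else {
		/* kernel helper: at a fixed address from the first pass */
		*addr = helper_base + (uint64_t)call->imm;
		*fixed = true;
	}
	return 0;
}

int main(void)
{
	uint64_t subprogs[2] = { 0x1000, 0x2000 };
	struct toy_call call = { .pseudo_call = true, .imm = 1 };
	uint64_t addr;
	bool fixed;

	return toy_get_func_addr(&call, true, subprogs, 2, 0, &addr, &fixed);
}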

Thanks,
Sandipan



* Re: [PATCH bpf] bpf: powerpc64: optimize JIT passes for bpf function calls
From: Daniel Borkmann @ 2018-12-03 22:53 UTC
  To: Sandipan Das; +Cc: naveen.n.rao, linuxppc-dev, ast, netdev

On 12/03/2018 02:26 PM, Sandipan Das wrote:
> Hi Daniel,
> 
> On 03/12/18 6:18 PM, Daniel Borkmann wrote:
>>
>> Thanks for the patch, just to clarify, it's targeted at bpf-next and
>> not bpf, correct?
> 
> This patch is targeted at the bpf tree.

Ok, thanks for clarifying, applied to bpf!

