* [PATCH bpf-next v2 0/2] bpf, arm64: Support per-cpu instructions
@ 2024-04-24 17:35 Puranjay Mohan
  2024-04-24 17:35 ` [PATCH bpf-next v2 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs Puranjay Mohan
  2024-04-24 17:35 ` [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper Puranjay Mohan
  0 siblings, 2 replies; 9+ messages in thread
From: Puranjay Mohan @ 2024-04-24 17:35 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Zi Shen Lim,
	Xu Kuohai, Florent Revest, linux-arm-kernel, linux-kernel, bpf
  Cc: puranjay12

Changes in v1 -> v2:
v1: https://lore.kernel.org/all/20240405091707.66675-1-puranjay12@gmail.com/
- Add a patch to inline bpf_get_smp_processor_id()
- Fix an issue in MRS instruction encoding as pointed out by Will
- Remove CONFIG_SMP check

This series adds support for an internal-only per-CPU instruction and
inlines the bpf_get_smp_processor_id() helper in the ARM64 BPF JIT.

Here are examples of bpf_get_smp_processor_id() and percpu_array_map_lookup_elem()
before and after this series.

                                         BPF
                                        =====
              BEFORE                                       AFTER
             --------                                     -------

int cpu = bpf_get_smp_processor_id();           int cpu = bpf_get_smp_processor_id();
(85) call bpf_get_smp_processor_id#229032       (18) r0 = 0xffff800082072008
                                                (bf) r0 = r0
                                                (61) r0 = *(u32 *)(r0 +0)


p = bpf_map_lookup_elem(map, &zero);            p = bpf_map_lookup_elem(map, &zero);
(18) r1 = map[id:78]                            (18) r1 = map[id:153]
(18) r2 = map[id:82][0]+65536                   (18) r2 = map[id:157][0]+65536
(85) call percpu_array_map_lookup_elem#313512   (07) r1 += 496
                                                (61) r0 = *(u32 *)(r2 +0)
                                                (35) if r0 >= 0x1 goto pc+5
                                                (67) r0 <<= 3
                                                (0f) r0 += r1
                                                (79) r0 = *(u64 *)(r0 +0)
                                                (bf) r0 = r0
                                                (05) goto pc+1
                                                (b7) r0 = 0


                                      ARM64 JIT
                                     ===========

              BEFORE                                       AFTER
             --------                                     -------

int cpu = bpf_get_smp_processor_id();      int cpu = bpf_get_smp_processor_id();
mov     x10, #0xfffffffffffff4d0           mov     x7, #0xffff8000ffffffff
movk    x10, #0x802b, lsl #16              movk    x7, #0x8207, lsl #16
movk    x10, #0x8000, lsl #32              movk    x7, #0x2008
blr     x10                                mrs     x10, tpidr_el1
add     x7, x0, #0x0                       add     x7, x7, x10
                                           ldr     w7, [x7]


p = bpf_map_lookup_elem(map, &zero);       p = bpf_map_lookup_elem(map, &zero);
mov     x0, #0xffff0003ffffffff            mov     x0, #0xffff0003ffffffff
movk    x0, #0xce5c, lsl #16               movk    x0, #0xe0f3, lsl #16
movk    x0, #0xca00                        movk    x0, #0x7c00
mov     x1, #0xffff8000ffffffff            mov     x1, #0xffff8000ffffffff
movk    x1, #0x8bdb, lsl #16               movk    x1, #0xb0c7, lsl #16
movk    x1, #0x6000                        movk    x1, #0xe000
mov     x10, #0xffffffffffff3ed0           add     x0, x0, #0x1f0
movk    x10, #0x802d, lsl #16              ldr     w7, [x1]
movk    x10, #0x8000, lsl #32              cmp     x7, #0x1
blr     x10                                b.cs    0x0000000000000090
add     x7, x0, #0x0                       lsl     x7, x7, #3
                                           add     x7, x7, x0
                                           ldr     x7, [x7]
                                           mrs     x10, tpidr_el1
                                           add     x7, x7, x10
                                           b       0x0000000000000094
                                           mov     x7, #0x0

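For reference, the inlined per-CPU array lookup above corresponds
roughly to the following C (a sketch only: 496 is the offset of the
pptrs array inside struct bpf_array for this particular build, the
map's max_entries is 1, and this_cpu_off stands for the tpidr_el1-based
per-CPU offset added by the new instruction):

void *percpu_array_lookup(struct bpf_array *array, void *key)
{
	u32 index = *(u32 *)key;

	if (index >= array->map.max_entries)
		return NULL;
	/* pptrs[index] is a per-CPU offset; adding this CPU's
	 * base offset turns it into an absolute address.
	 */
	return (void *)array->pptrs[index] + this_cpu_off;
}
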
              Performance improvement measured using the benchmark in [1]

             BEFORE                                       AFTER
            --------                                     -------

glob-arr-inc   :   23.817 ± 0.019M/s      glob-arr-inc   :   24.631 ± 0.027M/s
arr-inc        :   23.253 ± 0.019M/s      arr-inc        :   23.742 ± 0.023M/s
hash-inc       :   12.258 ± 0.010M/s      hash-inc       :   12.625 ± 0.004M/s

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Puranjay Mohan (2):
  arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs
  bpf, arm64: inline bpf_get_smp_processor_id() helper

 arch/arm64/include/asm/insn.h |  7 +++++++
 arch/arm64/lib/insn.c         | 11 +++++++++++
 arch/arm64/net/bpf_jit.h      |  6 ++++++
 arch/arm64/net/bpf_jit_comp.c | 14 ++++++++++++++
 kernel/bpf/verifier.c         | 11 ++++++++++-
 5 files changed, 48 insertions(+), 1 deletion(-)

-- 
2.40.1



* [PATCH bpf-next v2 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs
  2024-04-24 17:35 [PATCH bpf-next v2 0/2] bpf, arm64: Support per-cpu instructions Puranjay Mohan
@ 2024-04-24 17:35 ` Puranjay Mohan
  2024-04-24 17:35 ` [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper Puranjay Mohan
  1 sibling, 0 replies; 9+ messages in thread
From: Puranjay Mohan @ 2024-04-24 17:35 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Zi Shen Lim,
	Xu Kuohai, Florent Revest, linux-arm-kernel, linux-kernel, bpf
  Cc: puranjay12

From: Puranjay Mohan <puranjay12@gmail.com>

Support an instruction for resolving absolute addresses of per-CPU
data from their per-CPU offsets. This instruction is internal-only;
users are not allowed to use it directly. For now, it will only be
used for inlining optimizations between the BPF verifier and the BPF
JITs.

Since commit 7158627686f0 ("arm64: percpu: implement optimised pcpu
access using tpidr_el1"), the per-cpu offset for the CPU is stored in
the tpidr_el1/2 register of that CPU.

To support this BPF instruction in the ARM64 JIT, the following ARM64
instructions are emitted:

mov dst, src		// Move src to dst, if src != dst
mrs tmp, tpidr_el1/2	// Move the per-CPU offset of the current CPU into tmp.
add dst, dst, tmp	// Add the per-CPU offset to dst.
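
Expressed in C, the emitted sequence computes roughly the following (a
minimal sketch of the semantics; the helper name is illustrative, and
under VHE the offset would be read from tpidr_el2 instead):

static inline unsigned long resolve_percpu_addr(unsigned long src)
{
	unsigned long off;

	/* src holds a per-CPU offset (an address within the percpu
	 * section); tpidr_el1 holds this CPU's base offset, so their
	 * sum is the absolute address of this CPU's copy of the data.
	 */
	asm volatile("mrs %0, tpidr_el1" : "=r"(off));
	return src + off;
}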

To measure the performance improvement provided by this change, the
benchmark in [1] was used:

Before:
glob-arr-inc   :   23.597 ± 0.012M/s
arr-inc        :   23.173 ± 0.019M/s
hash-inc       :   12.186 ± 0.028M/s

After:
glob-arr-inc   :   23.819 ± 0.034M/s
arr-inc        :   23.285 ± 0.017M/s
hash-inc       :   12.419 ± 0.011M/s

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
---
 arch/arm64/include/asm/insn.h |  7 +++++++
 arch/arm64/lib/insn.c         | 11 +++++++++++
 arch/arm64/net/bpf_jit.h      |  6 ++++++
 arch/arm64/net/bpf_jit_comp.c | 14 ++++++++++++++
 4 files changed, 38 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index db1aeacd4cd9..8de0e39b29f3 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -135,6 +135,11 @@ enum aarch64_insn_special_register {
 	AARCH64_INSN_SPCLREG_SP_EL2	= 0xF210
 };
 
+enum aarch64_insn_system_register {
+	AARCH64_INSN_SYSREG_TPIDR_EL1	= 0x4684,
+	AARCH64_INSN_SYSREG_TPIDR_EL2	= 0x6682,
+};
+
 enum aarch64_insn_variant {
 	AARCH64_INSN_VARIANT_32BIT,
 	AARCH64_INSN_VARIANT_64BIT
@@ -686,6 +691,8 @@ u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
 }
 #endif
 u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
+u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
+			 enum aarch64_insn_system_register sysreg);
 
 s32 aarch64_get_branch_offset(u32 insn);
 u32 aarch64_set_branch_offset(u32 insn, s32 offset);
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index a635ab83fee3..b008a9b46a7f 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -1515,3 +1515,14 @@ u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
 
 	return insn;
 }
+
+u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
+			 enum aarch64_insn_system_register sysreg)
+{
+	u32 insn = aarch64_insn_get_mrs_value();
+
+	insn &= ~GENMASK(19, 0);
+	insn |= sysreg << 5;
+	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT,
+					    insn, result);
+}
diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
index 23b1b34db088..b627ef7188c7 100644
--- a/arch/arm64/net/bpf_jit.h
+++ b/arch/arm64/net/bpf_jit.h
@@ -297,4 +297,10 @@
 #define A64_ADR(Rd, offset) \
 	aarch64_insn_gen_adr(0, offset, Rd, AARCH64_INSN_ADR_TYPE_ADR)
 
+/* MRS */
+#define A64_MRS_TPIDR_EL1(Rt) \
+	aarch64_insn_gen_mrs(Rt, AARCH64_INSN_SYSREG_TPIDR_EL1)
+#define A64_MRS_TPIDR_EL2(Rt) \
+	aarch64_insn_gen_mrs(Rt, AARCH64_INSN_SYSREG_TPIDR_EL2)
+
 #endif /* _BPF_JIT_H */
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 76b91f36c729..ed8f9716d9d5 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -877,6 +877,15 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			emit(A64_ORR(1, tmp, dst, tmp), ctx);
 			emit(A64_MOV(1, dst, tmp), ctx);
 			break;
+		} else if (insn_is_mov_percpu_addr(insn)) {
+			if (dst != src)
+				emit(A64_MOV(1, dst, src), ctx);
+			if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
+				emit(A64_MRS_TPIDR_EL2(tmp), ctx);
+			else
+				emit(A64_MRS_TPIDR_EL1(tmp), ctx);
+			emit(A64_ADD(1, dst, dst, tmp), ctx);
+			break;
 		}
 		switch (insn->off) {
 		case 0:
@@ -2527,6 +2536,11 @@ bool bpf_jit_supports_arena(void)
 	return true;
 }
 
+bool bpf_jit_supports_percpu_insn(void)
+{
+	return true;
+}
+
 void bpf_jit_free(struct bpf_prog *prog)
 {
 	if (prog->jited) {
-- 
2.40.1



* [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper
  2024-04-24 17:35 [PATCH bpf-next v2 0/2] bpf, arm64: Support per-cpu instructions Puranjay Mohan
  2024-04-24 17:35 ` [PATCH bpf-next v2 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs Puranjay Mohan
@ 2024-04-24 17:35 ` Puranjay Mohan
  2024-04-24 22:01   ` Andrii Nakryiko
  1 sibling, 1 reply; 9+ messages in thread
From: Puranjay Mohan @ 2024-04-24 17:35 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Zi Shen Lim,
	Xu Kuohai, Florent Revest, linux-arm-kernel, linux-kernel, bpf
  Cc: puranjay12

As the ARM64 JIT now implements the BPF_MOV64_PERCPU_REG instruction,
inline bpf_get_smp_processor_id().

ARM64 uses the per-cpu variable cpu_number to store the cpu id.
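
Conceptually, the inlined sequence amounts to (a sketch of the
semantics, not literal code, where tpidr_el1 stands for this CPU's
per-CPU base offset):

	cpu = *(u32 *)((u64)&cpu_number + tpidr_el1);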

Here is how the BPF and ARM64 JITed assembly changes after this commit:

                                         BPF
                                        =====
              BEFORE                                       AFTER
             --------                                     -------

int cpu = bpf_get_smp_processor_id();           int cpu = bpf_get_smp_processor_id();
(85) call bpf_get_smp_processor_id#229032       (18) r0 = 0xffff800082072008
                                                (bf) r0 = r0
                                                (61) r0 = *(u32 *)(r0 +0)

                                      ARM64 JIT
                                     ===========

              BEFORE                                       AFTER
             --------                                     -------

int cpu = bpf_get_smp_processor_id();      int cpu = bpf_get_smp_processor_id();
mov     x10, #0xfffffffffffff4d0           mov     x7, #0xffff8000ffffffff
movk    x10, #0x802b, lsl #16              movk    x7, #0x8207, lsl #16
movk    x10, #0x8000, lsl #32              movk    x7, #0x2008
blr     x10                                mrs     x10, tpidr_el1
add     x7, x0, #0x0                       add     x7, x7, x10
                                           ldr     w7, [x7]

Performance improvement using the benchmark in [1]

             BEFORE                                       AFTER
            --------                                     -------

glob-arr-inc   :   23.817 ± 0.019M/s      glob-arr-inc   :   24.631 ± 0.027M/s
arr-inc        :   23.253 ± 0.019M/s      arr-inc        :   23.742 ± 0.023M/s
hash-inc       :   12.258 ± 0.010M/s      hash-inc       :   12.625 ± 0.004M/s

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
 kernel/bpf/verifier.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9715c88cc025..3373be261889 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -20205,7 +20205,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			goto next_insn;
 		}
 
-#ifdef CONFIG_X86_64
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
 		/* Implement bpf_get_smp_processor_id() inline. */
 		if (insn->imm == BPF_FUNC_get_smp_processor_id &&
 		    prog->jit_requested && bpf_jit_supports_percpu_insn()) {
@@ -20214,11 +20214,20 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			 * changed in some incompatible and hard to support
 			 * way, it's fine to back out this inlining logic
 			 */
+#if defined(CONFIG_X86_64)
 			insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
 			insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
 			insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
 			cnt = 3;
+#elif defined(CONFIG_ARM64)
+			struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
 
+			insn_buf[0] = cpu_number_addr[0];
+			insn_buf[1] = cpu_number_addr[1];
+			insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
+			insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
+			cnt = 4;
+#endif
 			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
 			if (!new_prog)
 				return -ENOMEM;
-- 
2.40.1



* Re: [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper
  2024-04-24 17:35 ` [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper Puranjay Mohan
@ 2024-04-24 22:01   ` Andrii Nakryiko
  2024-04-25 10:14     ` Puranjay Mohan
  0 siblings, 1 reply; 9+ messages in thread
From: Andrii Nakryiko @ 2024-04-24 22:01 UTC (permalink / raw)
  To: Puranjay Mohan
  Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Zi Shen Lim,
	Xu Kuohai, Florent Revest, linux-arm-kernel, linux-kernel, bpf,
	puranjay12

On Wed, Apr 24, 2024 at 10:36 AM Puranjay Mohan <puranjay@kernel.org> wrote:
>
> As ARM64 JIT now implements BPF_MOV64_PERCPU_REG instruction, inline
> bpf_get_smp_processor_id().
>
> ARM64 uses the per-cpu variable cpu_number to store the cpu id.
>
> Here is how the BPF and ARM64 JITed assembly changes after this commit:
>
>                                          BPF
>                                         =====
>               BEFORE                                       AFTER
>              --------                                     -------
>
> int cpu = bpf_get_smp_processor_id();           int cpu = bpf_get_smp_processor_id();
> (85) call bpf_get_smp_processor_id#229032       (18) r0 = 0xffff800082072008
>                                                 (bf) r0 = r0

nit: hmm, you are probably using a bit outdated bpftool, it should be
emitted as:

(bf) r0 = &(void __percpu *)(r0)

> [...]

Besides the nits, lgtm.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 9715c88cc025..3373be261889 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -20205,7 +20205,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>                         goto next_insn;
>                 }
>
> -#ifdef CONFIG_X86_64
> +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)

I think you can drop this, we are protected by
bpf_jit_supports_percpu_insn() check and newly added inner #if/#elif
checks?

>                 /* Implement bpf_get_smp_processor_id() inline. */
>                 if (insn->imm == BPF_FUNC_get_smp_processor_id &&
>                     prog->jit_requested && bpf_jit_supports_percpu_insn()) {
> @@ -20214,11 +20214,20 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>                          * changed in some incompatible and hard to support
>                          * way, it's fine to back out this inlining logic
>                          */
> +#if defined(CONFIG_X86_64)
>                         insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
>                         insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>                         insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>                         cnt = 3;
> +#elif defined(CONFIG_ARM64)
> +                       struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
>

this &cpu_number offset is not guaranteed to be within 4GB on arm64?



* Re: [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper
  2024-04-24 22:01   ` Andrii Nakryiko
@ 2024-04-25 10:14     ` Puranjay Mohan
  2024-04-25 18:09       ` Andrii Nakryiko
  0 siblings, 1 reply; 9+ messages in thread
From: Puranjay Mohan @ 2024-04-25 10:14 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Zi Shen Lim,
	Xu Kuohai, Florent Revest, linux-arm-kernel, linux-kernel, bpf

Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:

> On Wed, Apr 24, 2024 at 10:36 AM Puranjay Mohan <puranjay@kernel.org> wrote:
>>
>> As ARM64 JIT now implements BPF_MOV64_PERCPU_REG instruction, inline
>> bpf_get_smp_processor_id().
>>
>> ARM64 uses the per-cpu variable cpu_number to store the cpu id.
>>
>> Here is how the BPF and ARM64 JITed assembly changes after this commit:
>>
>>                                          BPF
>>                                         =====
>>               BEFORE                                       AFTER
>>              --------                                     -------
>>
>> int cpu = bpf_get_smp_processor_id();           int cpu = bpf_get_smp_processor_id();
>> (85) call bpf_get_smp_processor_id#229032       (18) r0 = 0xffff800082072008
>>                                                 (bf) r0 = r0
>
> nit: hmm, you are probably using a bit outdated bpftool, it should be
> emitted as:
>
> (bf) r0 = &(void __percpu *)(r0)

Yes, I was using the bpftool shipped with the distro. I tried it again
with the latest bpftool and it emitted this as expected.

>
>> [...]
>
> Besides the nits, lgtm.
>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 9715c88cc025..3373be261889 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -20205,7 +20205,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>>                         goto next_insn;
>>                 }
>>
>> -#ifdef CONFIG_X86_64
>> +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
>
> I think you can drop this, we are protected by
> bpf_jit_supports_percpu_insn() check and newly added inner #if/#elif
> checks?

If I remove this and later add support for percpu_insn on RISC-V
without inlining bpf_get_smp_processor_id(), then it will cause
problems here, right? Because then the last 5-6 lines inside this
if(){} will be executed for RISC-V.

>
>>                 /* Implement bpf_get_smp_processor_id() inline. */
>>                 if (insn->imm == BPF_FUNC_get_smp_processor_id &&
>>                     prog->jit_requested && bpf_jit_supports_percpu_insn()) {
>> @@ -20214,11 +20214,20 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>>                          * changed in some incompatible and hard to support
>>                          * way, it's fine to back out this inlining logic
>>                          */
>> +#if defined(CONFIG_X86_64)
>>                         insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
>>                         insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>>                         insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>>                         cnt = 3;
>> +#elif defined(CONFIG_ARM64)
>> +                       struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
>>
>
> this &cpu_number offset is not guaranteed to be within 4GB on arm64?

Unfortunately, the per-cpu section is not placed in the first 4GB and
therefore the per-cpu pointers are not 32-bit on ARM64.



* Re: [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper
  2024-04-25 10:14     ` Puranjay Mohan
@ 2024-04-25 18:09       ` Andrii Nakryiko
  2024-04-25 18:55         ` Puranjay Mohan
  0 siblings, 1 reply; 9+ messages in thread
From: Andrii Nakryiko @ 2024-04-25 18:09 UTC (permalink / raw)
  To: Puranjay Mohan
  Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Zi Shen Lim,
	Xu Kuohai, Florent Revest, linux-arm-kernel, linux-kernel, bpf

On Thu, Apr 25, 2024 at 3:14 AM Puranjay Mohan <puranjay@kernel.org> wrote:
>
> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
>
> > On Wed, Apr 24, 2024 at 10:36 AM Puranjay Mohan <puranjay@kernel.org> wrote:
> >>
> >> As ARM64 JIT now implements BPF_MOV64_PERCPU_REG instruction, inline
> >> bpf_get_smp_processor_id().
> >>
> >> ARM64 uses the per-cpu variable cpu_number to store the cpu id.
> >>
> >> Here is how the BPF and ARM64 JITed assembly changes after this commit:
> >>
> >>                                          BPF
> >>                                         =====
> >>               BEFORE                                       AFTER
> >>              --------                                     -------
> >>
> >> int cpu = bpf_get_smp_processor_id();           int cpu = bpf_get_smp_processor_id();
> >> (85) call bpf_get_smp_processor_id#229032       (18) r0 = 0xffff800082072008
> >>                                                 (bf) r0 = r0
> >
> > nit: hmm, you are probably using a bit outdated bpftool, it should be
> > emitted as:
> >
> > (bf) r0 = &(void __percpu *)(r0)
>
> Yes, I was using the bpftool shipped with the distro. I tried it again
> with the latest bpftool and it emitted this as expected.

Cool, would be nice to update the commit message with the right syntax
for next revision, thanks!

>
> >> [...]
> >
> > Besides the nits, lgtm.
> >
> > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> >
> >> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> >> index 9715c88cc025..3373be261889 100644
> >> --- a/kernel/bpf/verifier.c
> >> +++ b/kernel/bpf/verifier.c
> >> @@ -20205,7 +20205,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> >>                         goto next_insn;
> >>                 }
> >>
> >> -#ifdef CONFIG_X86_64
> >> +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
> >
> > I think you can drop this, we are protected by
> > bpf_jit_supports_percpu_insn() check and newly added inner #if/#elif
> > checks?
>
> If I remove this and later add support of percpu_insn on RISCV without
> inlining bpf_get_smp_processor_id() then it will cause problems here
> right? because then the last 5-6 lines inside this if(){} will be
> executed for RISCV.

Just add

#else
return -EFAULT;
#endif

?

I'm trying to avoid this duplication of the defined(CONFIG_xxx) checks
for supported architectures.

>
> >
> >>                 /* Implement bpf_get_smp_processor_id() inline. */
> >>                 if (insn->imm == BPF_FUNC_get_smp_processor_id &&
> >>                     prog->jit_requested && bpf_jit_supports_percpu_insn()) {
> >> @@ -20214,11 +20214,20 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> >>                          * changed in some incompatible and hard to support
> >>                          * way, it's fine to back out this inlining logic
> >>                          */
> >> +#if defined(CONFIG_X86_64)
> >>                         insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
> >>                         insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
> >>                         insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
> >>                         cnt = 3;
> >> +#elif defined(CONFIG_ARM64)
> >> +                       struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
> >>
> >
> > this &cpu_number offset is not guaranteed to be within 4GB on arm64?
>
> Unfortunately, the per-cpu section is not placed in the first 4GB and
> therefore the per-cpu pointers are not 32-bit on ARM64.

I see. It might make sense to turn x86-64 code into using MOV64_IMM as
well to keep more of the logic common. Then it will be just the
difference of an offset that's loaded. Give it a try?



* Re: [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper
  2024-04-25 18:09       ` Andrii Nakryiko
@ 2024-04-25 18:55         ` Puranjay Mohan
  2024-04-25 20:43           ` Andrii Nakryiko
  0 siblings, 1 reply; 9+ messages in thread
From: Puranjay Mohan @ 2024-04-25 18:55 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Zi Shen Lim,
	Xu Kuohai, Florent Revest, linux-arm-kernel, linux-kernel, bpf

Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:

> On Thu, Apr 25, 2024 at 3:14 AM Puranjay Mohan <puranjay@kernel.org> wrote:
>>
>> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
>>
>> [...]
>> >> int cpu = bpf_get_smp_processor_id();           int cpu = bpf_get_smp_processor_id();
>> >> (85) call bpf_get_smp_processor_id#229032       (18) r0 = 0xffff800082072008
>> >>                                                 (bf) r0 = r0
>> >
>> > nit: hmm, you are probably using a bit outdated bpftool, it should be
>> > emitted as:
>> >
>> > (bf) r0 = &(void __percpu *)(r0)
>>
>> Yes, I was using the bpftool shipped with the distro. I tried it again
>> with the latest bpftool and it emitted this as expected.
>
> Cool, would be nice to update the commit message with the right syntax
> for next revision, thanks!
>

Sure, will do.

>> [...]
>> >> -#ifdef CONFIG_X86_64
>> >> +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
>> >
>> > I think you can drop this, we are protected by
>> > bpf_jit_supports_percpu_insn() check and newly added inner #if/#elif
>> > checks?
>>
>> If I remove this and later add support of percpu_insn on RISCV without
>> inlining bpf_get_smp_processor_id() then it will cause problems here
>> right? because then the last 5-6 lines inside this if(){} will be
>> executed for RISCV.
>
> Just add
>
> #else
> return -EFAULT;

I don't think we can return.

> #endif
>
> ?
>
> I'm trying to avoid this duplication of the defined(CONFIG_xxx) checks
> for supported architectures.

Does the following look correct?

I will do it like this:

                /* Implement bpf_get_smp_processor_id() inline. */
                if (insn->imm == BPF_FUNC_get_smp_processor_id &&
                    prog->jit_requested && bpf_jit_supports_percpu_insn()) {
                        /* BPF_FUNC_get_smp_processor_id inlining is an
                         * optimization, so if pcpu_hot.cpu_number is ever
                         * changed in some incompatible and hard to support
                         * way, it's fine to back out this inlining logic
                         */
#if defined(CONFIG_X86_64)
                        insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
                        insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
                        insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
                        cnt = 3;
#elif defined(CONFIG_ARM64)
                        struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };

                        insn_buf[0] = cpu_number_addr[0];
                        insn_buf[1] = cpu_number_addr[1];
                        insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
                        insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
                        cnt = 4;
#else
                        goto next_insn;
#endif
                        new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
                        if (!new_prog)
                                return -ENOMEM;

                        delta    += cnt - 1;
                        env->prog = prog = new_prog;
                        insn      = new_prog->insnsi + i + delta;
                        goto next_insn;
                }


>>
>> >
>> >>                 /* Implement bpf_get_smp_processor_id() inline. */
>> >>                 if (insn->imm == BPF_FUNC_get_smp_processor_id &&
>> >>                     prog->jit_requested && bpf_jit_supports_percpu_insn()) {
>> >> @@ -20214,11 +20214,20 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>> >>                          * changed in some incompatible and hard to support
>> >>                          * way, it's fine to back out this inlining logic
>> >>                          */
>> >> +#if defined(CONFIG_X86_64)
>> >>                         insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
>> >>                         insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>> >>                         insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>> >>                         cnt = 3;
>> >> +#elif defined(CONFIG_ARM64)
>> >> +                       struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
>> >>
>> >
>> > this &cpu_number offset is not guaranteed to be within 4GB on arm64?
>>
>> Unfortunately, the per-cpu section is not placed in the first 4GB and
>> therefore the per-cpu pointers are not 32-bit on ARM64.
>
> I see. It might make sense to turn x86-64 code into using MOV64_IMM as
> well to keep more of the logic common. Then it will be just the
> difference of an offset that's loaded. Give it a try?

I think MOV64_IMM would have more overhead than MOV32_IMM, and if we
can use MOV32_IMM on x86-64 we should keep doing it that way. Wdyt?



* Re: [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper
  2024-04-25 18:55         ` Puranjay Mohan
@ 2024-04-25 20:43           ` Andrii Nakryiko
  2024-04-26 10:26             ` Puranjay Mohan
  0 siblings, 1 reply; 9+ messages in thread
From: Andrii Nakryiko @ 2024-04-25 20:43 UTC (permalink / raw)
  To: Puranjay Mohan
  Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Zi Shen Lim,
	Xu Kuohai, Florent Revest, linux-arm-kernel, linux-kernel, bpf

On Thu, Apr 25, 2024 at 11:56 AM Puranjay Mohan <puranjay@kernel.org> wrote:
>
> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
>
> > On Thu, Apr 25, 2024 at 3:14 AM Puranjay Mohan <puranjay@kernel.org> wrote:
> >>
> >> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
> >>
> >> [...]
> >> >> -#ifdef CONFIG_X86_64
> >> >> +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
> >> >
> >> > I think you can drop this, we are protected by
> >> > bpf_jit_supports_percpu_insn() check and newly added inner #if/#elif
> >> > checks?
> >>
> >> If I remove this and later add support of percpu_insn on RISCV without
> >> inlining bpf_get_smp_processor_id() then it will cause problems here
> >> right? because then the last 5-6 lines inside this if(){} will be
> >> executed for RISCV.
> >
> > Just add
> >
> > #else
> > return -EFAULT;
>
> I don't think we can return.

ah, because it's not an error condition, right

>
> > #endif
> >
> > ?
> >
> > I'm trying to avoid this duplication of the defined(CONFIG_xxx) checks
> > for supported architectures.
>
> Does the following look correct?
>
> I will do it like this:
>
>                 /* Implement bpf_get_smp_processor_id() inline. */
>                 if (insn->imm == BPF_FUNC_get_smp_processor_id &&
>                     prog->jit_requested && bpf_jit_supports_percpu_insn()) {
>                         /* BPF_FUNC_get_smp_processor_id inlining is an
>                          * optimization, so if pcpu_hot.cpu_number is ever
>                          * changed in some incompatible and hard to support
>                          * way, it's fine to back out this inlining logic
>                          */
> #if defined(CONFIG_X86_64)
>                         insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
>                         insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>                         insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>                         cnt = 3;
> #elif defined(CONFIG_ARM64)
>                         struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
>
>                         insn_buf[0] = cpu_number_addr[0];
>                         insn_buf[1] = cpu_number_addr[1];
>                         insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>                         insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>                         cnt = 4;
> #else
>                         goto next_insn;
> #endif

yep, I just wrote a large comment about goto next_insn above and then
saw you already proposed that :) Yep, I think this is the way.

>                         new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
>                         if (!new_prog)
>                                 return -ENOMEM;
>
>                         delta    += cnt - 1;
>                         env->prog = prog = new_prog;
>                         insn      = new_prog->insnsi + i + delta;
>                         goto next_insn;
>                 }
>
>
> >>
> >> >
> >> >>                 /* Implement bpf_get_smp_processor_id() inline. */
> >> >>                 if (insn->imm == BPF_FUNC_get_smp_processor_id &&
> >> >>                     prog->jit_requested && bpf_jit_supports_percpu_insn()) {
> >> >> @@ -20214,11 +20214,20 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> >> >>                          * changed in some incompatible and hard to support
> >> >>                          * way, it's fine to back out this inlining logic
> >> >>                          */
> >> >> +#if defined(CONFIG_X86_64)
> >> >>                         insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
> >> >>                         insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
> >> >>                         insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
> >> >>                         cnt = 3;
> >> >> +#elif defined(CONFIG_ARM64)
> >> >> +                       struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
> >> >>
> >> >
> >> > this &cpu_number offset is not guaranteed to be within 4GB on arm64?
> >>
> >> Unfortunately, the per-cpu section is not placed in the first 4GB and
> >> therefore the per-cpu pointers are not 32-bit on ARM64.
> >
> > I see. It might make sense to turn x86-64 code into using MOV64_IMM as
> > well to keep more of the logic common. Then it will be just the
> > difference of an offset that's loaded. Give it a try?
>
> I think MOV64_IMM would have more overhead than MOV32_IMM and if we can
> use it in x86-64 we should keep doing it that way. Wdyt?

My assumption (which I didn't check) was that BPF JITs should optimize
such MOV64_IMM that have a constant fitting within 32-bits with a
faster and smaller instruction. But I'm fine leaving it as is, of
course.



* Re: [PATCH bpf-next v2 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper
  2024-04-25 20:43           ` Andrii Nakryiko
@ 2024-04-26 10:26             ` Puranjay Mohan
  0 siblings, 0 replies; 9+ messages in thread
From: Puranjay Mohan @ 2024-04-26 10:26 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Zi Shen Lim,
	Xu Kuohai, Florent Revest, linux-arm-kernel, linux-kernel, bpf

Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:

> On Thu, Apr 25, 2024 at 11:56 AM Puranjay Mohan <puranjay@kernel.org> wrote:
>
> [...]
>> >> >
>> >> > this &cpu_number offset is not guaranteed to be within 4GB on arm64?
>> >>
>> >> Unfortunately, the per-cpu section is not placed in the first 4GB and
>> >> therefore the per-cpu pointers are not 32-bit on ARM64.
>> >
>> > I see. It might make sense to turn x86-64 code into using MOV64_IMM as
>> > well to keep more of the logic common. Then it will be just the
>> > difference of an offset that's loaded. Give it a try?
>>
>> I think MOV64_IMM would have more overhead than MOV32_IMM and if we can
>> use it in x86-64 we should keep doing it that way. Wdyt?
>
> My assumption (which I didn't check) was that BPF JITs should optimize
> such MOV64_IMM that have a constant fitting within 32-bits with a
> faster and smaller instruction. But I'm fine leaving it as is, of
> course.

You are right. I verified that the JITs will optimize this if the imm is
32-bit. So, I will make it common in the next version.

Also, for the readers, we are discussing:

1) BPF_MOV32_IMM : This moves a 32-bit imm into a register and
                   zero-extends it.

2) BPF_LD_IMM64 : This moves (loads) a 64-bit imm into a register. The
                  JITs will optimize this to a BPF_MOV32_IMM if the imm
                  fits in 32 bits.

Not to be confused with:
3) BPF_MOV64_IMM: This also takes a 32-bit imm but sign-extends it to
                  64 bits rather than zero-extending.
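
To make the difference concrete (a small sketch with arbitrary example
values, not taken from the patch):

	struct bpf_insn a = BPF_MOV32_IMM(BPF_REG_0, -1);
		/* r0 = 0x00000000ffffffff: 32-bit mov, zero-extended */
	struct bpf_insn b = BPF_MOV64_IMM(BPF_REG_0, -1);
		/* r0 = 0xffffffffffffffff: imm sign-extended to 64 bits */
	struct bpf_insn c[2] = { BPF_LD_IMM64(BPF_REG_0, 0x123456789abcULL) };
		/* r0 = 0x0000123456789abc: full 64-bit imm encoded as two
		 * insns; a JIT can shrink it to one narrow mov when the
		 * upper 32 bits are zero */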

Thanks,
Puranjay
